Update: I’m slowly getting there; a third and more accurate diagram is attached below, which now includes where security policies, iptables and network namespaces are deployed. I’ve now also removed the previous two incorrect diagrams as they seem to be popping up on Google.
This post is just a quick response to a comment by Turing Machinæ on Show 227 – OpenStack Neutron Overview with Kyle Mestery, which was “I’ve learnt absolutely NOTHING about openstack from this podcast.” Whilst I don’t agree, I have some empathy; time and time again recently I’ve found myself hitting a brick wall when trying to understand ‘the new hotness’ where cloud, Linux and a host of other technologies are concerned. I want real detail; I want to understand things at a fundamental level. I really do “wanna get DIRTY!”
Blogs, podcasts, manuals, wikis, whatever; very few are focused on the low-level network detail and implementation. Little is written with a network engineer in mind; it’s all a black box. As a colleague said to me just today, “I just click next”. This is possible because the detail is abstracted and most server/dev/ops/sysadmin folk simply need to get a subnet or two allocated, enter it in a web GUI and… click next. There are clearly people out there who fully understand the network aspects related to these products, but mostly, I don’t think they are ‘network people’.
Aside from the obvious negative career implications (your skills are only required to build the underlay), it raises an interesting point around abstraction and understanding of the underlying components. I suspect that in the same way that I don’t think about or care about network card drivers any more, no-one cares about how networking is implemented in the new stack.
Just to drive this point home, here’s my take on how OpenStack networking is provided, in proper, low-level network detail, when using OpenStack Havana and Mirantis Fuel (to provide simplified, automated builds). Keep in mind that this relates to a single physical host.
You’ll note that, despite a few days’ research, I still don’t fully understand quite how br-int (and the guest, for that matter) talks to anything other than the br-ex-based network. Quite why there are two connections from br-int to br-ex is also a mystery. I’d love to fully understand all this, so please get in touch if you can fill me in – for everyone’s benefit. I’ll update the diagram as and when I understand more.
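If you want to poke at this yourself, the sketch below is roughly what I’ve been doing: it walks every OVS bridge on the host and prints its ports, which at least makes the br-int to br-ex links visible. Treat it as a rough aid rather than anything authoritative; it assumes Open vSwitch is installed, root access and a working Python.

    #!/usr/bin/env python
    # Rough sketch: list every OVS bridge on this host and the ports attached
    # to it, so the links between br-int, br-ex and friends become visible.
    # Assumes Open vSwitch is installed and this is run as root.
    import subprocess

    def ovs(*args):
        # Thin wrapper around the ovs-vsctl CLI; returns stdout as a list.
        out = subprocess.check_output(["ovs-vsctl"] + list(args))
        return out.decode().split()

    for bridge in ovs("list-br"):
        print("Bridge: %s" % bridge)
        for port in ovs("list-ports", bridge):
            # Patch ports and veth pairs are how one bridge plugs into another;
            # names such as int-br-ex / phy-br-ex usually give the peer away.
            print("  port: %s" % port)

A plain ovs-vsctl show gives you much the same information in one go; doing it programmatically just makes it easier to diff hosts.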
Of course, please keep in mind this diagram is not gospel; it is incomplete and possibly very wrong (hopefully not for long).
I’d also suggest you keep in mind that this diagram only represents a single physical host. Add in a server blade, a converged system (say UCS), your ToR switches and all the rest and you have a golden ticket to troubleshooting and performance hell (perhaps). The bridges shown are all OVS. It’s not all negative; the fact that this is even possible on Linux is pretty impressive and certainly demonstrates what can be done. I’ve also no doubt (as mentioned next) that massive improvements are on the way where OpenStack, containers and other technologies are concerned.
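To give a flavour of what per-host troubleshooting might look like, here’s another rough sketch: it dumps the OpenFlow rules programmed on each OVS bridge plus the host’s iptables rules, which is where the security groups in the diagram are actually enforced. Again, this assumes root access and the standard ovs-vsctl, ovs-ofctl and iptables tools; it isn’t anything official.

    #!/usr/bin/env python
    # Rough per-host troubleshooting helper: dump the OpenFlow rules on each
    # OVS bridge plus the host's iptables rules (where Neutron security
    # groups are enforced in this generation of OpenStack).
    # Assumes ovs-vsctl, ovs-ofctl and iptables are installed; run as root.
    import subprocess

    def run(cmd):
        print("### " + " ".join(cmd))
        print(subprocess.check_output(cmd).decode())

    bridges = subprocess.check_output(["ovs-vsctl", "list-br"]).decode().split()
    for bridge in bridges:
        # The flow table shows VLAN tag rewrites, tunnel handling and drops.
        run(["ovs-ofctl", "dump-flows", bridge])

    # Security group rules surface here as per-port iptables chains.
    run(["iptables", "-S"])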
One last thing: I’m aware that the Juno OpenStack release includes ML2, which may simplify things somewhat.
–Removed to prevent incorrect information being propagated by Google
Just one more thing: where is the configuration for all of this stored? How would you back it up? How do you monitor all this?
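My working answer, for what it’s worth: the logical state (networks, subnets, ports, routers, security groups) lives in the Neutron database, the service and agent settings live in flat files under /etc/neutron, and OVS keeps its own state in OVSDB; the agents then build the bridges, namespaces and iptables rules from that. So a crude single-node backup might look something like the sketch below. The database name, credentials and file paths are assumptions and will differ between deployments.

    #!/usr/bin/env python
    # Crude single-node backup sketch for Neutron state and configuration.
    # Assumptions (adjust for your deployment): the logical state lives in a
    # MySQL database called 'neutron', mysqldump can find credentials in
    # ~/.my.cnf, and the flat-file config sits under /etc/neutron.
    import subprocess, time

    stamp = time.strftime("%Y%m%d-%H%M%S")

    # 1. Logical state: networks, subnets, ports, routers, security groups.
    with open("neutron-db-%s.sql" % stamp, "wb") as dump:
        subprocess.check_call(["mysqldump", "neutron"], stdout=dump)

    # 2. Service and agent configuration files.
    subprocess.check_call(
        ["tar", "czf", "neutron-etc-%s.tar.gz" % stamp, "/etc/neutron"])

    # 3. The OVS database itself (default path on most distributions).
    subprocess.check_call(
        ["cp", "/etc/openvswitch/conf.db", "ovs-conf.db-%s" % stamp])

Monitoring is a bigger question; at a minimum you’d want to watch agent health (neutron agent-list) and the usual interface and flow counters on each host.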
Here’s an updated version (one day later). I’m pretty sure I’ve been getting confused by differences in how Controller, Network and Compute nodes are networked. Features such as DVR and L3 HA change the picture quite a bit too. It’s too early to cover those but I will in time.
–Removed to prevent incorrect information being propagated by Google
Here’s a much improved version (three weeks later), with the compute and network node architectures split out. I’ve still got work to do, as this diagram assumes VLAN separation and I’d also like to accommodate GRE and VXLAN tunnel use. Note I’m showing two ‘tenants’, each with one guest, VLAN and ‘router’:
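On the ‘router’ point: in this OVS-plus-namespaces model, each tenant router in the diagram is realised as a network namespace on the network node (named qrouter-<uuid> by the L3 agent), and each tenant DHCP server as a qdhcp-<uuid> namespace. The sketch below simply walks those namespaces and shows the interfaces and routes inside each one; a rough aid rather than anything definitive, and it assumes root plus the iproute2 tools.

    #!/usr/bin/env python
    # Walk the qrouter-/qdhcp- network namespaces on a network node and show
    # the interfaces and routes inside each one. The namespace naming is what
    # the default L3 and DHCP agents use. Run as root; needs iproute2.
    import subprocess

    output = subprocess.check_output(["ip", "netns", "list"]).decode()
    namespaces = [line.split()[0] for line in output.splitlines() if line.strip()]

    for ns in namespaces:
        if not (ns.startswith("qrouter-") or ns.startswith("qdhcp-")):
            continue
        print("=== namespace %s ===" % ns)
        # Interfaces (qr-*, qg-*, tap*) and their addresses in this namespace.
        subprocess.call(["ip", "netns", "exec", ns, "ip", "addr"])
        # The per-tenant routing table lives in here too.
        subprocess.call(["ip", "netns", "exec", ns, "ip", "route"])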
I guess the talk was more about the strategy of Neutron. To be honest, I have never seen the gory details of Neutron out in the open, or the reasons for such an architecture. This is one of the issues with deployment.
Packet Pushers Weekly is a deep dive technology show and we simply aren’t focussed on teaching basics. We want to talk about the stuff that no one else does.
If you want to learn OpenStack then buy some books. Lots of good ones, vendors have some good manuals and there are plenty of blog posts on installing and using OpenStack.
Fully appreciate your comments regarding the podcast, I’d hoped I’d made it clear my more negative comments were not directed at it.
Looking ahead, I hope to rectify what I see as the lack of good information out there by producing it myself.
I’ve got a few crap books that are not worth reading.
There is great information about the details of networking in OpenStack, along with commands:
https://www.rdoproject.org/Networking_in_too_much_detail
Indeed Suraj, that was one of the resources for my research (the RDO site has been very useful); however, it omits many key details and does not provide the full picture. Too much detail is not enough! 🙂 For instance, the G-H connection is never covered and you’ll also note there is not a single physical interface on the diagram.
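On the physical interface point, one thing you can at least do is ask OVS which ports on a bridge are backed by real NICs. A quick sketch; it assumes the external bridge is called br-ex (the default on my Fuel builds) and uses the presence of a backing device under /sys/class/net as a rough ‘is this physical?’ test.

    #!/usr/bin/env python
    # Show which ports on a bridge are backed by a real, physical NIC.
    # Assumes the external bridge is named br-ex (adjust as needed); physical
    # NICs expose a backing bus device under /sys/class/net/<port>/device,
    # whereas OVS internal ports and veths generally do not.
    import os, subprocess

    BRIDGE = "br-ex"  # assumption: adjust to your external bridge name

    ports = subprocess.check_output(["ovs-vsctl", "list-ports", BRIDGE]).decode().split()
    for port in ports:
        physical = os.path.exists("/sys/class/net/%s/device" % port)
        kind = "physical NIC" if physical else "virtual port"
        print("%s: %s (%s)" % (BRIDGE, port, kind))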
Steven Iveson wrote: ‘This is possible because the detail is abstracted and most server/dev/ops/sysadmin folk simply need to get a subnet or two allocated, enter it in a web GUI and… click next. There are clearly people out there who fully understand the network aspects related to these products, but mostly, I don’t think they are ‘network people’.’
Absurd. This statement does more to spread ignorance than it does to add any sort of real-world expertise to the subject. When in fact, it is the ‘network people’ (you know them, those brutes) who lead the SDN effort – along with the old-school Unix/Linux programmers (C/C++).
Hey Mickey. Are you saying the abstraction isn’t there? That users typically hand craft things? I’m not sure what point you are trying to make regarding this?
Is it ‘network people’ – if so, who are they? You may have a point, how many network ‘users’ know who David Miller is? How many Linux users do for that matter? There’s a real disconnect between those who create and those who consume – even in the open source world. I’m all ears on how this situation can be improved.
Cheers
Thanks Steven for posting the article, very informative indeed.
Hey Ranjeet,
You’re welcome and thanks for saying so. Do keep in mind things have moved on (I expect) quite a bit since I wrote this and are hopefully much better.