F5 Networks’ Traffic Management Operating System (TMOS) is, first and foremost and for the sake of clarity, NOT an individual operating system. It is the software foundation for all of F5’s network or traffic (not data) products, physical or virtual. TMOS almost seems to be a concept rather than a concrete thing when you first try to understand it; I’ve struggled to find a truly definitive definition of TMOS in any manual or on any website.
So, what is TMOS? It’s not too tough after all, really; TMOS encompasses a collection of operating systems and firmware, all of which run on BIG-IP hardware appliances or within the BIG-IP Virtual Edition. The terms BIG-IP, TMOS and even TMM are often used interchangeably where features and system or feature modules are concerned. This can be confusing; for instance, although LTM is a TMOS system module running within TMM, it’s commonly referred to as BIG-IP LTM. I suspect we have the F5 marketing team to thank for this muddled state of affairs. See the comments section for some clarification from F5, but some debate too.
TMOS and F5’s so-called ‘full proxy’ architecture were introduced in 2004 with the release of v9.0. This is essentially where the BIG-IP software and hardware diverged; previously the hardware and software were simply both referred to as BIG-IP (or BIG-IP Controller). Now, the hardware or ‘platform’ is BIG-IP, and the software TMOS. Anything capable of running TMOS and supporting its full-proxy architecture counts as a BIG-IP, so the virtualised version of TMOS is called BIG-IP Virtual Edition (VE) rather than TMOS VE. Where the VE editions are concerned, just the TMM and HMS software components of TMOS are present (more details next).
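To make the ‘full proxy’ idea concrete, here’s a minimal Python sketch (my own illustration, not F5 code): the proxy terminates the client’s TCP connection and opens a completely separate connection to the back-end server, so each side has its own independent TCP stack and parameters. The function names and the upper-casing backend are hypothetical, purely for demonstration.

```python
import socket
import threading

def start_backend(host="127.0.0.1"):
    """A stand-in backend server that upper-cases whatever it receives."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))  # port 0: let the OS pick a free port
    srv.listen(1)

    def serve():
        conn, _ = srv.accept()
        data = conn.recv(1024)
        conn.sendall(data.upper())
        conn.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]

def start_full_proxy(backend_port, host="127.0.0.1"):
    """Terminate the client's TCP connection, then open a completely
    separate connection to the backend: two connections, two sets of
    TCP parameters -- the essence of a 'full proxy'."""
    lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    lsock.bind((host, 0))
    lsock.listen(1)

    def proxy():
        client, _ = lsock.accept()                                 # client-side connection
        upstream = socket.create_connection((host, backend_port))  # independent server-side connection
        upstream.sendall(client.recv(1024))
        client.sendall(upstream.recv(1024))
        client.close()
        upstream.close()

    threading.Thread(target=proxy, daemon=True).start()
    return lsock.getsockname()[1]
```

Because the client never shares a TCP connection with the server, the proxy is free to apply different buffering, security and optimisation policies on each side, which is exactly what the full-proxy architecture enables.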
TMOS encompasses all of these software components;
- TMM; The Traffic Management Microkernel which includes;
- Software in the form of an operating system, system and feature modules (such as LTM), other modules (such as iRules) and multiple network ‘stacks’ and proxies; FastL4, FastHTTP, Fast Application Proxy, TCPExpress, IPv4, IPv6 and SCTP.
- Software in the form of the connection between TMM and the firmware that operates the dedicated SSL card and others.
- An SSL stack.
- Interfaces to the HMS.
- FastL4; Packet-based ‘half-proxy’ functions, mostly incorporated in hardware ASICs (PVA) or FPGAs on hardware platforms (software only in VE).
- HMS; The Host Management Subsystem; this runs a modified version of the CentOS Linux operating system and provides the various interfaces and tools used to manage the system such as the GUI Configuration Utility, Advanced (Bash) Shell, tmsh CLI, DNS client, SNMP, NTP client and more.
- AOM; Always On Management; a lights-out management system accessible through the management network interface and serial console only. This is independent of the HMS (despite the shared network interface) and can be used to reset the device.
- MOS; A Maintenance Operating System; used for disk management, file system mounting and related maintenance tasks.
- EUD; End User Diagnostics; used to perform BIG-IP hardware tests.
- LTM; This and other ‘feature’ modules such as GTM and APM expose specific parts of TMM functionality when licensed. They are typically focussed on a particular type of service (load balancing, authentication and so on).
So, that’s five operating systems* (I’m not actually counting LTM etc.) and related interfaces to understand. It sounds more complex than it is; your average server has a BIOS (a bit like the EUD), a RAID BIOS (the MOS) and an iLO or DRAC card (the AOM) and, along with the OS you install, that’s four already. Let’s go into some further detail on each of these components.
Traffic Management Microkernel (TMM)
TMM is the core component of TMOS as it handles all network activities and communicates directly with the network switch hardware (or vNICs for VE). TMM also controls communications to and from the HMS. Local Traffic Manager (LTM) and other modules run within the TMM.
TMM is single-threaded prior to TMOS v11.3; on multi-processor or multi-core systems, Clustered Multi-Processing (CMP) is used to run multiple TMM instances/processes, one per core. From v11.3, two TMM processes are run per core, greatly increasing potential performance and throughput.
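With multiple TMM instances running under CMP, incoming traffic has to be distributed (disaggregated) so that every packet of a given connection lands on the same TMM. A toy sketch of that idea, hashing the connection 4-tuple to pick an instance (BIG-IP’s real disaggregator logic differs; the function name and hash choice here are my own illustration):

```python
import zlib

def tmm_for_flow(src_ip, src_port, dst_ip, dst_port, n_tmms):
    """Toy CMP-style disaggregation: hash the flow's 4-tuple to select a
    TMM instance, so all packets of one connection reach the same
    TMM/core. Purely illustrative, not BIG-IP's actual algorithm."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % n_tmms
```

Because the hash is deterministic, a connection always maps to the same instance, while different connections spread across all cores.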
TMM shares hardware resources with the HMS (discussed next) but has access to all CPUs and the majority of RAM.
FastL4
FastL4 is utilised via a FastL4 profile assigned to a Performance (Layer 4) Virtual Server. The FastL4 profile essentially provides the original (first-generation load balancer) packet-based (packet-by-packet) layer-four transparent forwarding half-proxy functionality used prior to TMOS and LTM v9.0. On hardware platforms this is mostly performed in hardware (providing very high performance); with VEs this is done in software, but it is still significantly faster than a standard L7 Virtual Server.
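The contrast with the full proxy is that a half-proxy never terminates TCP at all: each packet aimed at the virtual server simply has its destination rewritten and is forwarded on. A minimal dict-based model of that packet-by-packet behaviour (my own illustration; real FastL4 does this in PVA/FPGA hardware where available):

```python
def fastl4_forward(packet, vip, pool_member):
    """Toy packet-by-packet 'half-proxy': no TCP termination and no
    separate server-side connection; packets destined for the virtual
    server (vip) are rewritten towards the chosen pool member and
    forwarded. Purely illustrative, not F5 code."""
    if (packet["dst_ip"], packet["dst_port"]) != vip:
        return packet  # not for the virtual server; leave untouched
    forwarded = dict(packet)
    forwarded["dst_ip"], forwarded["dst_port"] = pool_member
    return forwarded
```

Because the payload is never reassembled or inspected, this style of forwarding is very fast, but it also cannot apply the L7 processing a full-proxy Virtual Server can.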
Host Management Subsystem (HMS)
The Host Management Subsystem runs a modified version of the CentOS Linux operating system and provides the various interfaces and tools used to manage the system such as the GUI Configuration Utility, Advanced (Bash) Shell, tmsh CLI, DNS client, SNMP and NTP client and/or server.
The HMS can be accessed through the dedicated management network interface, TMM switch interfaces or the serial console (either directly or via AOM).
HMS shares hardware resources with TMM but only runs on a single CPU and is assigned a limited amount of RAM.
Always On Management (AOM)
The AOM (another dedicated hardware subsystem) allows for ‘lights out’ power management of, and console access to, the HMS via the serial console or using SSH via the management network interface. AOM is available on nearly all BIG-IP hardware platforms, including the Enterprise Manager 4000 product, but not on VIPRION. Note AOM ‘shares’ the management network interface with the HMS.
Maintenance Operating System (MOS)
MOS is installed in an additional boot location that is automatically created when TMOS version 10 or 11 is installed. MOS, which runs in RAM, is used for disk and file system maintenance purposes such as; drive reformatting, volume mounting, system reimaging and file retrieval. MOS also supports network access and file transfer.
MOS is entered by interrupting the standard boot process via the serial console (by selecting TMOS maintenance at the GRUB boot menu) or booting from USB media.
The grub_default -d command can be used to display the MOS version currently installed. Note, only one copy of MOS is installed on the system (taken from the latest TMOS image file installed) regardless of the number of volumes present.
End User Diagnostics (EUD)
EUD is a software program used to perform a series of BIG-IP hardware tests, accessible only via the serial console at system boot. EUD is run from the boot menu or via supported USB media.
Here’s a diagram that brings it all together visually.
And another that demonstrates the different ‘planes’.
I hope this article helps clarify what TMOS is all about; I know I was confused for years, and understanding the true nature of TMOS has certainly helped me think more clearly about a great but ultimately complex product.
*As of v11 all these operating systems are 64-bit.