What The Heck Is F5 Networks’ TMOS?

F5 Networks’ Traffic Management Operating System (TMOS) is, first and foremost and for the sake of clarity, NOT an individual operating system. It is the software foundation for all of F5’s network or traffic (not data) products, whether physical or virtual. TMOS almost seems to be a concept rather than a concrete thing when you first try to understand it; I’ve struggled to find a truly definitive definition of TMOS in any manual or on any website.

So, what is TMOS? It’s not too tough after all, really; TMOS encompasses a collection of operating systems and firmware, all of which run on BIG-IP hardware appliances or within the BIG-IP Virtual Edition. BIG-IP and TMOS (and even TMM) are often used interchangeably where features, system modules and feature modules are concerned. This can be confusing; for instance, although LTM is a TMOS system module running within TMM, it’s commonly referred to as BIG-IP LTM. I suspect we have the F5 marketing team to thank for this muddled state of affairs. See the comments section for some clarification from F5, and some debate too.

TMOS and F5’s so-called ‘full proxy’ architecture were introduced in 2004 with the release of v9.0. This is essentially where the BIG-IP software and hardware diverged; previously the hardware and software were simply both referred to as BIG-IP (or BIG-IP Controller). Now the hardware or ‘platform’ is BIG-IP, and the software is TMOS. Anything capable of running TMOS and supporting its full proxy counts as a BIG-IP, so the virtualised version of TMOS is called BIG-IP Virtual Edition (VE) rather than TMOS VE. Where the VE editions are concerned, just the TMM and HMS software components of TMOS are present (more details next).

TMOS encompasses all of these software components:

  • TMM: the Traffic Management Microkernel, which includes:
    • Software in the form of an operating system, system and feature modules (such as LTM), other modules (such as iRules) and multiple network ‘stacks’ and proxies: FastL4, FastHTTP, Fast Application Proxy, TCP Express, IPv4, IPv6 and SCTP.
    • Software that connects TMM to the firmware operating the dedicated SSL card and other acceleration hardware.
    • An SSL stack.
    • Interfaces to the HMS.
  • FastL4: packet-based ‘half-proxy’ functions, mostly incorporated in hardware ASICs (PVA) or FPGAs on hardware platforms (software only in VE).
  • HMS: the Host Management Subsystem, which runs a modified version of the CentOS Linux operating system and provides the various interfaces and tools used to manage the system, such as the GUI Configuration Utility, Advanced (Bash) Shell, tmsh CLI, DNS client, SNMP, NTP client and more.
  • AOM: Always On Management, a lights-out management system accessible through the management network interface and serial console only. This is independent of the HMS (despite the shared network interface) and can be used to reset the device.
  • MOS: a Maintenance Operating System used for disk management, file system mounting and related maintenance tasks.
  • EUD: End User Diagnostics, used to perform BIG-IP hardware tests.
  • LTM: this and other ‘feature’ modules such as GTM and APM expose specific parts of TMM functionality when licensed. They are typically focussed on a particular type of service (load balancing, authentication and so on); see the example just after this list for how to check what’s licensed and provisioned.
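
If you have command-line access to a BIG-IP, here’s the promised quick way to check which feature modules are licensed and provisioned on a given device. A minimal sketch from the Advanced (Bash) Shell, assuming a reasonably recent TMOS release; the output will obviously vary by platform and licence:

    # List the provisioning level (none, minimum, nominal, dedicated) of each module
    tmsh list sys provision

    # Display the licence details, including the enabled modules and features
    tmsh show sys license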

So, that’s five operating systems* (I’m not actually counting LTM etc.) and related interfaces to understand. That sounds more complex than it really is; your average server has a BIOS (a bit like the EUD), a RAID BIOS (the MOS) and an iLO or DRAC card (the AOM) and, along with the OS you install, that’s four already. Let’s go into some further detail on each of these components.

Traffic Management Microkernel (TMM)

TMM is the core component of TMOS as it handles all network activities and communicates directly with the network switch hardware (or vNICs for VE).  TMM also controls communications to and from the HMS. Local Traffic Manager (LTM) and other modules run within the TMM.

TMM is single-threaded prior to TMOS v11.3; on multi-processor or multi-core systems, Clustered Multiprocessing (CMP) is used to run multiple TMM instances/processes, one per core. From v11.3, two TMM processes run per core, greatly increasing potential performance and throughput.
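
If you’re curious how many TMM instances a given system is running, something along these lines should show it (a sketch assuming tmsh on v11.x or later; field names differ slightly between versions):

    # Per-TMM statistics; expect one entry per TMM instance
    tmsh show sys tmm-info

    # CPU utilisation broken down per core
    tmsh show sys cpu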

TMM shares hardware resources with the HMS (discussed next) but has access to all CPUs and the majority of RAM.
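
You can see this split for yourself; the following tmsh command (a sketch; the exact breakdown differs between versions) reports memory usage by subsystem, with TMM typically holding the lion’s share:

    # Memory usage broken down by TMM and the other (host) subsystems
    tmsh show sys memory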

FastL4

FastL4 is utilised via a FastL4 profile assigned to a Performance (Layer 4) Virtual Server. The FastL4 profile essentially provides the original (first generation load balancer) packet-based (packet-by-packet) layer-four transparent forwarding half-proxy functionality used prior to TMOS and LTM v9.0. On hardware platforms this is mostly performed in hardware (providing very high performance); with VEs this is done in software but is still significantly faster than a standard L7 Virtual Server.
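
As a rough illustration, this is how a Performance (Layer 4) Virtual Server might be built in tmsh. A minimal sketch only: the names my_fastl4, vs_fastl4 and my_pool are hypothetical, my_pool is assumed to already exist, and you’d tune the profile settings to suit your traffic:

    # Create a custom FastL4 profile based on the built-in fastL4 parent
    tmsh create ltm profile fastl4 my_fastl4 defaults-from fastL4 idle-timeout 300

    # Create the Performance (Layer 4) virtual server using that profile
    tmsh create ltm virtual vs_fastl4 destination 192.0.2.10:80 ip-protocol tcp profiles add { my_fastl4 } pool my_pool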

Host Management Subsystem (HMS)

The Host Management Subsystem runs a modified version of the CentOS Linux operating system and provides the various interfaces and tools used to manage the system such as the GUI Configuration Utility, Advanced (Bash) Shell, tmsh CLI, DNS client, SNMP and NTP client and/or server.

The HMS can be accessed through the dedicated management network interface, TMM switch interfaces or the serial console (either directly or via AOM).

The HMS shares hardware resources with TMM but runs on only a single CPU and is assigned a limited amount of RAM.
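
Because the HMS is Linux underneath, the usual tools sit alongside tmsh. A couple of harmless examples (a sketch; file locations and output vary by TMOS version):

    # Confirm the CentOS base the HMS is built on (if the file is present on your version)
    cat /etc/redhat-release

    # tmsh can be run from the bash prompt or entered interactively
    tmsh show sys version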

Always On Management (AOM)

The AOM (another dedicated hardware subsystem) allows for ‘lights out’ power management of, and console access to, the HMS via the serial console or using SSH via the management network interface. AOM is available on nearly all BIG-IP hardware platforms, including the Enterprise Manager 4000 product, but not on VIPRION. Note that AOM ‘shares’ the management network interface with the HMS.

Maintenance Operating System (MOS)

MOS is installed in an additional boot location that is automatically created when TMOS version 10 or 11 is installed. MOS, which runs in RAM, is used for disk and file system maintenance purposes such as drive reformatting, volume mounting, system reimaging and file retrieval. MOS also supports network access and file transfer.

MOS is entered by interrupting the standard boot process via the serial console (by selecting TMOS maintenance at the GRUB boot menu) or booting from USB media.

The grub_default -d command can be used to display the MOS version currently installed. Note that only one copy of MOS is installed on the system (taken from the latest TMOS image file installed), regardless of the number of volumes present.
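
For example, from the Advanced (Bash) Shell (a sketch; the output obviously depends on the images installed):

    # Display the currently installed MOS version
    grub_default -d

    # List the boot locations/software volumes and what’s installed in each
    tmsh show sys software status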

End User Diagnostics (EUD)

EUD is a software program used to perform a series of BIG-IP hardware tests; it is accessible only via the serial console at system boot. EUD is run from the boot menu or from supported USB media.

Here’s a diagram that brings it all together visually.

[Diagram: BIG-IP Architecture – Software]

And another that demonstrates the different ‘planes’:

[Diagram: TMOS Planes]

I hope this article helps clarify what TMOS is all about; I was confused for years, and understanding the true nature of TMOS has certainly helped me think more clearly about a great but ultimately complex product.

*As of v11 all these operating systems are 64-bit.

Steven Iveson

Steven Iveson, the last of four children of the seventies, was born in London and has never been too far from a shooting, bombing or riot. He's now grateful to live in a small town in East Yorkshire in the north east of England with his wife Sam and their four children. He's worked in the IT industry for over 15 years in a variety of roles, predominantly in data centre environments. Working with switches and routers pretty much from the start he now also has a thirst for application delivery, SDN, virtualisation and related products and technologies. He's published a number of F5 Networks related books and is a regular contributor at DevCentral.

  • Stephen Stack

    Great article again Steven. Man, this is one of the biggest pain points I had starting out with F5. It’s an LTM, wait, no… BigIP… right? Well, yes, but LTM is a module and TMOS is… and so on. This is a great explanation, and is doing the rounds in our office.

    • http://twitter.com/sjiveson What Lies Beneath

      Thanks Stephen. I didn’t even try and work this out until I started writing a book on the subject last year (unreleased at present). Even then, despite many, many hours of research over many months I didn’t put all the pieces together until a very kind person (who unfortunately I can’t name) gave me just enough ‘extra’ information for it all to make sense. Of course, it all seems obvious now.

  • John

    Thanks for the article. I work for f5 in field engineering. Your article is probably a good sign we need to clean up the message if it is not clear. I’ll take that up the hill immediately.

    First off, BigIP is the trademarked product marketing name for all things f5 based on our full proxy architecture. That’s it. It’s our ‘golden arches’.

    Here is my too wordy technical explanation. Apologies in advance ..

    I always explain our technical terms in the form of management, control, and data plane elements. Management plane elements interact with external actors to affect the system. Control plane elements form the basis to control the various aspects of a system. Data plane elements actually handle the traffic being managed for end users.

    TMOS is the software ecosystem which forms the management, control, and dataplane of BigIP solutions.

    As for hardware, we have various hardware components including switching fabric chips, our older home grown ASICs for connection offloading called PVA, our newer FPGA based offload technologies for offload and security, SSL/TLS offload ASICs, network processor for things like compression offloading, and generalized processors. These are means to an end and are driven by TMOS components.

    TMMs are real-time software microkernels which form the overall L4-L7 intelligence for the data plane. If it’s in the TMM, it’s there to help push data traffic. We create clusters of these TMMs to linearly scale the traffic management data plane. TMMs have direct driver-level integration with much of our hardware. Think speed. It’s software which thinks like a switch.

    We have some data plane software which does not run inside of the TMM for a multitude of reasons. These are called plugin-based services. These include our WAF (product name ASM), which spends a great deal of its life consulting policy elements, and our acceleration-based technologies which make use of other general I/O devices like disks. Think intense compute flexibility.

    The rest of the TMOS eco-system is there for management and control plane functionality. Management plane clients include our SNMP agents, iControl service, GUI services, and TMSH clients. Control plane services include switchboard controllers, routing daemons, configuration services, etc.

    I hope this isolation into management, control, and data plane components helps the understanding of what’s all in TMOS.

    • http://twitter.com/sjiveson What Lies Beneath

      Hi John. Thanks for taking the time to comment and educate, it’s appreciated. Let’s quickly deal with my pedantic side first: the trademark is actually BIG-IP.

      Swiftly moving on, I think this is mainly an issue because in some ways it’s not essential knowledge; as an F5 customer for six years I never bothered asking, and the answer wasn’t going to enlighten anyone or solve any problems. Of course, understanding the full complement of software components that make up TMOS (perhaps I should call it BIG-IP TMOS now) is more relevant. I was unaware of MOS and AOM (and SCCP before it) for many years and would certainly have benefited had I known about them.

      Equally, I think it’s a common and enduring mistake for most to believe the HMS is the device’s operating system. I’ve incorrectly answered many an auditor’s question along these lines! There are many possible reasons for this I can imagine, most of which are valid (although some commercial); a few books from F5 for those that want to explore further would probably help.

      I don’t find your take on things too enlightening myself. I get your point about the BIG-IP ‘brand’ but let’s be clear, when I buy an ADC from F5, I’m buying a BIG-IP switch (with TMOS software) right? Same for a VE (although it’s less clear cut). TMOS and more specifically TMM provides the full proxy functionality. Happy to be corrected here.

      Also on the hardware side, I’ve read a few bits from Don regarding FPGAs but I’m not clear on where they are used at present, could you be more specific? I appreciate your points on TMM; it clarifies a few things for me. You still don’t describe what LTM (or other modules ignoring ASM etc.) are actually referred to as? TMM ‘native’ modules perhaps?

      Personally I find the segmentation of function by ‘plane’ rather confusing in this case. I really struggle to see the difference between control and data. Management functions are relatively well defined but what, for instance, is an iRule? I’d assume control, so does that mean a connection handled by an iRule passes through both the control and data planes? How would I relate this all to the SDN forwarding plane?

      In summary, I appreciate your time and effort but I have to say it’s frustrating I always seem to have a few ‘outstanding’ questions.

      • http://twitter.com/sjiveson What Lies Beneath

        I’ve updated the diagram to include LTM etc. (but I’ll ignore ASM – interested to know if it can take advantage of SMP or CMP?).

      • John Gruber

        I’ve had a bit of a problem with the posting system and posting with just my email… so I’ll let your Google integration find me this time. I was also on the road all week. Sorry for the delay in response.

        Please let me backtrack a bit and see if I can answer the original question better than I did before.

        f5 creates TMOS (Traffic Management Operating System) which indeed is the operating system which will run on various BIG-IP platforms. These BIG-IP platforms are either combinations of specific hardware components or are specific virtualization environments for our virtual editions. Be it hardware or a virtualization environment, the distinguishing aspect is that they can run some version of TMOS. If you would like to see a specific platform’s designation, cat /PLATFORM on any TMOS instance and you will see what TMOS’ HAL functionality discovered.

        As an example of what makes up a BIG-IP platform, if TMOS runs on hardware which contains our older network processors (the PVA2 and PVA10 ASICs) and a packet-by-packet virtual service is provisioned, connection management offloading can happen. (These are virtual services with a fastL4 profile which do not use the standard dual TCP stack, session level, presentation level, or application level processing modules.) If the same virtual service was provisioned on a BIG-IP platform without the PVA processors, the packet-by-packet service would be handled in software (it’s still 6x faster than if you need full application layer processing). If you run TMOS on a BIG-IP platform with specific TLS/SSL processors on it, then you can offload session key generation and bulk crypto for TLS/SSL session/presentation layer processing. If the same service was provisioned on a BIG-IP platform without the TLS/SSL processors, the service would run in software. The same goes for compression offloading… or any other thing we learn to offload to hardware. The TMOS environment means you can interact with the management and control services the same no matter if you are dealing with an all-software implementation or hardware offloading on a 48 core VIPRION cluster. In that sense TMOS is very much f5’s operating system and it abstracts many BIG-IP platform details.

        On specific f5 BIG-IP platforms, merchant silicon switch fabrics are utilized. On other, newer BIG-IP platforms, network processing NICs are used. In virtualized environments we take advantage of para-virtualization interfaces. In still other virtualization environments we depend on fully virtualized NICs. It all depends on the performance needed. The good news is we can take you from our software-only BIG-IP platforms to multiple-hundred-Gbps BIG-IP hardware platforms and they all take the same virtual service definitions at L4-L7.

        The way f5 can scale you from software to huge distributed clusters is because we constructed our TMOS technology to scale out all the way back in the v9 days (2004). In the v9 days, we shipped our first platforms which had hardware ‘flow’ disaggregation/re-aggregation functionality. This allowed us to perform clustered multi-processing (CMP), with multiple TMMs (traffic management microkernels) all processing traffic. This technology gets really interesting when we let the disaggregation/re-aggregation span across hardware blades in a chassis. We call that technology VIPRION.

        Today, on certain BIG-IP platforms, we use FPGAs to perform the disaggregation/re-aggregation functionality, do assisted-mode packet-by-packet offloading (ePVA), do TCP SYN cookie checks, and perform other interesting security tricks. What we do in FPGAs continues to grow as we need it to. If you buy a BIG-IP platform with FPGAs, you can take advantage of those offload services, but you really don’t have to know they are there to take advantage of them because TMOS detects them and utilizes them. That’s the joy of TMOS. In the future, when generalized processing supports massive parallelism with TMM without heating the planet too much, we might choose the software flexibility of that technology over FPGAs. TMOS will let us do that. Either way we keep growing, scaling, and reaching out to new technologies. We have lived through more than one generational sway of the technology.

        Now for BIG-IP non-platform products…

        TMOS includes software and services which can perform many different feature aspects. f5 bundles those features into licensed BIG-IP products. If you want to see what TMOS features you are licensed to use on any given BIG-IP platform, look at the enabled list in the /config/bigip.license file. Any given BIG-IP product is the packaging of TMOS features.

        BIG-IP Local Traffic Manager (LTM) is the TMOS feature bundle targeted towards in-datacenter ADC functionality.

        BIG-IP Global Traffic Manager (GTM) is the TMOS feature bundle targeted towards multi-datacenter global traffic management.

        Other BIG-IP TMOS feature bundles include:

        BIG-IP ASM – Application Security Manager, our WAF feature set.
        BIG-IP APM – Access Policy Manager, our multifaceted SSL VPN and identity solutions.
        BIG-IP AFM – Advanced Firewall Manager, our network firewalling solutions.

        We have more…

        Again… BIG-IP products are TMOS feature bundles which cater to specific traffic management needs.

        You can decide to run multiple feature bundles (which can overlap!) on the same BIG-IP platform. So you can have BIG-IP LTM+GTM+APM all on the same BIG-IP platform. f5 is an engineering company, so we might tell you that reasonably you should not run every TMOS feature unless the BIG-IP platform has enough processing, RAM, and disk I/O to handle it. That’s where we differ from others in this space. We get beat up over this by our competition, but our customers and support division like things that work. (Frankly, most of our competition call out things as ‘product’ that we throw into our base product, like HTTP caching. When f5 differentiates a product by a feature bundle, there is a compelling service reason to do so.)

        In summary:

        BIG-IP platforms = something that can run TMOS. I guess the term ‘BIG-IP switch’ can apply because there is no instance of TMOS that does not perform L2 frame forwarding.

        BIG-IP products = some licensed bundle of TMOS features which is suited to specific traffic management needs.

        I wanted to comment on the idea of a data plane = forwarding plane. The f5 data plane can be configured to process traffic including:

        L2 Frame forwarding = traditional and opaque bridging
        L3 Packet forwarding = routing, NAT, SNAT
        L4 Connection management = packet-by-packet processing or dual stack connection processing
        L5 Session management = session offloading and multiplexing
        L6 Presentation management = encryption/decryption, serialization/deserialization, anything an iRule can do with scan commands
        L7 Application message management = message-by-message load balancing, message content manipulation, etc.

        The great sauce at f5 is the programmable data plane. It’s far more than a forwarding fabric or a regex packet engine (PE).

        Our control plane services are actually well defined too. We take pleasure in our interaction with dynamic routing, our highly functional SOD fail-over services, our device service group clustering, our message based provisioning services, and a great control plane integration between our local and global traffic mechanisms called iQuery. All are part of TMOS.

        Our management plane services, including our GUI client, our TMSH programmable shell, our SNMP agents, our iControl API services (which we have had for years), and many new cool API features we are releasing this year, all make us proud. Bring on the automation and devops crowds.

        You did comment on specific BIG-IP platforms having different functionality for management. That is true… On older platforms we had a dedicated microprocessor running its own Linux kernel called the SCP (system control processor). It had its own SSH stack and could be addressed for lights-out management console access. In newer BIG-IP platforms we improved on the older SCP design with a new microprocessor we called the AOM (always on management). Appreciate that sometimes this level of lights-out management does not make sense for a specific BIG-IP platform, so we don’t include it. Ask your account engineer for the specifications of any BIG-IP platform you buy!

        I hope this helps and does not just add to confusion ..

        John

        • http://twitter.com/sjiveson What Lies Beneath

          Hey John. Thanks again for your time and effort. There’s not too much new here (for me) but I appreciate the clarifications and I have learnt something. Just to make the point to others regarding some vendors, I’m happy to see that F5 continue to hold a very mature stance where blogs, social media and their staff are concerned.

          I’ll update the post and diagram shortly to accommodate some of this. Unfortunately, I’m still left wanting in some areas but the list is much smaller now so we’re getting there. So, my last few queries:

          1) Is ePVA software or not? Is this the FastL4 feature in VE (software), done using FPGAs in hardware or both dependent on platform? (assuming the latest hardware/software)

          2) I’d still consider LTM and other modules as exposing specific TMM features. However, as management features such as the GUI are also affected when licensed, perhaps ‘TMOS modules’ would be a better term? Surely ‘BIG-IP modules’ is misleading as we’re talking about exposing software features (even if hardware support is involved).

          I’d love to post about BIG-IQ, if there’s any help you can give me there, let me know.

          I’ll post an updated diagram regarding ‘planes’ soon, if there’s anything wrong with it please let me know.

          Thanks again, I’m sure a fair number of people have found this very useful.

          • John Gruber

            Some answers…

            ePVA performs ‘assisted’ mode connection offloading. That means the initial qualification and policy for the connection, like load balancing decision, is done by a software TMM, but then subsequent packets associated with the connection are fully handled in hardware.

            Using the term TMOS feature is absolutely better. There are TMOS features which involve non-TMM aspects as you noted. Again, BIG-IP is the marketing term for a TMOS feature bundle. The terminology for any product bundling is controlled by our product management and marketing departments (my way of dodging the blame for bad terms). They align with what a customer can order.

            The BIG-IQ family of management services are centralized and don’t carry data plane traffic. That’s what makes them different from BIG-IP. The BIG-IQ moniker is there to differentiate from the BIG-IP TMOS feature bundles as they are stand-alone management products which interact with BIG-IPs and other systems through various interfaces. BIG-IQ supports interactions with management clients through its northbound API interfaces. BIG-IQ calls other management APIs (like AWS APIs) through its eastbound interfaces. BIG-IQ supports management of BIG-IP systems through its southbound interfaces. Beyond that we need to get into specifics about the various BIG-IQ modules and their functions to detail the interactions.

            Let me look at your diagram a bit more and see if there is anything I can add.

            Thanks everyone!

          • http://twitter.com/sjiveson What Lies Beneath

            Thanks once again John. I hate to keep going in this format but can you clarify the difference between ePVA and PVA? I understand that ePVA isn’t all hardware as PVA seems to be but that sounds like a step back, not forwards. Does ePVA allow for hardware offload above L4 perhaps or have some other advantage(s) over PVA?