Firewalls: Expensive, Broken Routers

In a previous post on IPS, I made a fairly negative comment on the value that you get from enterprise firewalls in the modern environment. At the time, I said that I was just going to leave that comment hanging and see what happened. Well, precisely no one challenged me on it, which means either that everybody agrees with me, or that most folk were just hoping to avoid me getting on my soapbox for a good howl at the void. If it was the latter, you’re out of luck – here I go.

A Brief and Largely Inaccurate History of Network Security

At the beginning of the modern era for all things networky, there was no such thing as “network security”. After all, the user base was so small and specialised that, really, it didn’t make sense to worry about security. Everything was open, and it was a time when everybody shared resources on mainframes as peacefully as hobbits making cider. Most networks were completely self-contained in a small area and consisted solely of LAN technology.

After a while, the utility of mass computing became apparent, and non-academic organisations started looking to harness the power of the network across longer distances. To support this, they needed cheap connectivity that did not require significant investment, and most organisations chose to use a WAN that was already in place: the PSTN. Since the PSTN was also available to ordinary people, this created a security problem: any wingnut with a modem, a microcomputer, and a decent knowledge of how to use them could connect to a system and abuse it. This still wasn’t a massive issue, and the problem could usually be solved with obfuscation and basic security controls such as passwords.

As more businesses and organisations adopted WAN technology, they wanted to be able to connect networks together. At the same time, the US DoD wanted to be able to connect critical defence infrastructure together in a way that would be resilient to attack. Lo, the Internet was born.

The Golden Age of Firewalls

The move to a large-scale WAN such as the Internet, and the connection of separate administrative domains, meant that networks had to be protected from each other. The concept of the firewall began with a device that used a 5-tuple to make policy judgments on traffic passing through a network perimeter, the criteria being:

  • Protocol
  • Source L3 address
  • Source L4 port
  • Destination L3 address
  • Destination L4 port

At first the standard technique was to use a blacklist approach, where “bad” traffic was denied and everything else allowed through. Pretty quickly, it became apparent that this method was asking for trouble, and so most people moved to a whitelist approach, where you define the traffic that you want to traverse the firewall and deny everything else. Modern firewalls apply this approach by default, with an implicit deny statement at the end of all rule bases.
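
To make the whitelist-plus-implicit-deny model concrete, here is a minimal sketch in Python. It is purely illustrative – the addresses, rules, and matching logic are invented for this post rather than lifted from any vendor’s implementation – but it shows the shape of a 5-tuple rulebase with an implicit deny at the end.

    from typing import NamedTuple, Optional

    class Rule(NamedTuple):
        protocol: str             # e.g. "tcp" or "udp"
        src_ip: str               # "any" or an exact address (prefix matching elided)
        src_port: Optional[int]   # None means "any"
        dst_ip: str
        dst_port: Optional[int]

    # Hypothetical whitelist: only the traffic we explicitly want to permit.
    RULEBASE = [
        Rule("tcp", "any", None, "192.0.2.10", 443),  # inbound HTTPS to the web server
        Rule("tcp", "any", None, "192.0.2.20", 22),   # inbound SSH to a bastion host
    ]

    def permitted(protocol, src_ip, src_port, dst_ip, dst_port):
        """Return True only if a rule explicitly matches; anything else hits the implicit deny."""
        for r in RULEBASE:
            if (r.protocol == protocol
                    and r.src_ip in ("any", src_ip)
                    and r.src_port in (None, src_port)
                    and r.dst_ip in ("any", dst_ip)
                    and r.dst_port in (None, dst_port)):
                return True
        return False  # the implicit deny at the end of the rule base

    print(permitted("tcp", "203.0.113.5", 50000, "192.0.2.10", 443))  # True - matches rule 1
    print(permitted("tcp", "203.0.113.5", 50000, "192.0.2.10", 23))   # False - implicit deny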

It soon became apparent that statelessness – by which I mean that the firewall did not track outbound connections and so could not automatically allow return traffic without explicit rules – was not very efficient. Now, firewalls will usually operate statefully, and we’re all much happier as a result. Or at least I am, along with everyone I know who’s ever had to manage a firewall rule base. Then, NAT came along and ruined everything.
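
The stateful version of the same idea is simply that the firewall records the 5-tuple of each permitted outbound connection and then lets the matching return traffic back in without an explicit inbound rule. Another rough, illustrative sketch – again not any real product’s logic:

    # Illustrative connection state table: outbound flows are recorded so that the
    # reversed 5-tuple can be allowed back in without an explicit inbound rule.
    state_table = set()

    def allow_outbound(proto, src_ip, src_port, dst_ip, dst_port):
        # Outbound policy check elided; assume the connection is permitted.
        state_table.add((proto, src_ip, src_port, dst_ip, dst_port))
        return True

    def allow_inbound(proto, src_ip, src_port, dst_ip, dst_port):
        # Return traffic matches an existing state entry with the tuple reversed.
        if (proto, dst_ip, dst_port, src_ip, src_port) in state_table:
            return True
        return False  # otherwise fall through to the inbound rule base and its implicit deny

    allow_outbound("tcp", "10.0.0.5", 49152, "198.51.100.7", 443)
    print(allow_inbound("tcp", "198.51.100.7", 443, "10.0.0.5", 49152))  # True - return traffic
    print(allow_inbound("tcp", "198.51.100.7", 443, "10.0.0.5", 49153))  # False - no state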

NAT: An Ugly Hack for Ugly Times

I don’t intend to go into how NAT works, or why it was spewed into being. I assume that if you’re reading this, you have a pretty good handle on NAT. Suffice it to say that NAT is responsible for 97% of all cases of leprosy in squirrels*. NAT was a hack designed to avoid us having to confront an ugly truth: we’d built an Internet that was too small.

The problem with NAT is that it breaks things, and the main thing it breaks is the end-to-end principle. This is the concept that hosts at either end of a connection should be able to communicate directly, without intervening devices changing port numbers or otherwise interfering with traffic. Breaking this rule causes all sorts of problems with checksums and other horrors.
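
As a quick illustration of why, consider what a source NAT (PAT) device has to do: rewrite addresses and ports in flight and keep a translation table so that return traffic can be mapped back to the inside host. The toy sketch below uses invented addresses and skips the checksum arithmetic entirely, but every rewrite it performs is something the L3/L4 checksums – and any protocol that embeds addresses in its payload, such as FTP or SIP – must be fixed up for.

    import itertools

    # Toy source NAT (PAT) state: (inside_ip, inside_port) <-> public port.
    public_ip = "203.0.113.1"
    port_pool = itertools.count(40000)
    nat_table = {}      # (inside_ip, inside_port) -> public_port
    reverse_table = {}  # public_port -> (inside_ip, inside_port)

    def translate_outbound(src_ip, src_port, dst_ip, dst_port):
        key = (src_ip, src_port)
        if key not in nat_table:
            public_port = next(port_pool)
            nat_table[key] = public_port
            reverse_table[public_port] = key
        # The packet leaves with a rewritten source address and port.
        return (public_ip, nat_table[key], dst_ip, dst_port)

    def translate_inbound(public_port):
        # Return traffic can only be delivered if translation state exists.
        return reverse_table.get(public_port)

    print(translate_outbound("10.0.0.5", 49152, "198.51.100.7", 443))
    print(translate_inbound(40000))  # ('10.0.0.5', 49152)
    print(translate_inbound(40001))  # None - no state, so the packet goes nowhere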

The Modern Firewall

So here we are. At the present time, we have firewalls that break the end-to-end principle with NAT and filter traffic based on the original network security 5-tuple. Now, I don’t know how many of you actually read your logs (everyone, surely!), but a cursory glance at most rulebases will show that the traffic being allowed through a typical firewall falls into one of the following categories:

  • HTTP
  • SQL
  • HTTPS
  • IPSec
  • SSH

You may also see application traffic in there on other ports, and perimeter firewalls will be more tightly configured than internal ones, but the point is that most modern firewalls allow very little traffic through. This was a good thing back in the old days, because the typical attacker would be scanning for services to exploit. It therefore made sense to restrict inbound traffic to the ports that you wanted to provide services on and that absolutely _had_ to be exposed to the end user. That worked for a while, until the bad guys realised that, actually, all the interesting stuff is on the web server anyway, and – hey – it supports HTTPS, so they can tunnel attack traffic through without worrying about it being inspected. So the attacker drops some horrendous payload on your web server and pivots into the rest of your network using SQL injection or something similar. Game over – thanks for playing.

A fair proportion of traffic passing through firewalls is encrypted. This means that even if your firewall rules are tight and well documented, most modern attacks will still get through. This is exactly why companies such as Imperva and F5 have been successful in the application firewall space – the modern firewall is essentially a noise filter that stops cretins from attacking you. If the bad guy isn’t a cretin, then your firewall is nothing but a minor speed bump. Although firewalls are L3 devices, they typically have poor routing protocol support, or take a significant performance hit when routing protocols are enabled, so they can’t even be used as expensive routers. I might as well chuck the firewalls in the bin and buy a bunch of high-performance routers with a noise filter ACL applied.

Fixing the Firewall

So: your bog-standard enterprise firewall is an expensive, ineffective appliance that can’t even route properly. Now what?

Happily, firewall vendors realise that they risk irrelevance in the network security space unless they innovate, and several have jumped on the “Next Generation Firewall” (NGFW) bandwagon. Definitions differ, but broadly speaking, an NGFW consists of a standard firewall with additional security controls layered on top, such as IPS, application awareness, and potentially content inspection and identity awareness. Depending on your firewall platform of choice, these features may be licensed extras and may have an egregious impact on your firewall’s performance. The vendor also gets to sting you for a signature update subscription, since IPS and content inspection typically rely on signatures. It can be difficult to persuade some security bods of the benefits of turning this kind of functionality on, especially if they have a “one function per appliance” fetish. Stay at it: the benefits of collapsing this functionality into the firewall will tell in the end.

NGFWs can be an appropriate way of getting further value out of an existing investment in firewall platforms, but they can be expensive to buy and implement. As an alternative, you can layer in other defences, such as IPS and application firewalls. If these are combined with a network visibility and aggregation layer, it’s possible to ensure that traffic is automatically redirected to the appropriate tool. You can also look at solutions like host-based IPS on Internet-facing servers to reduce exposure. Whatever else you do, ensure that all your security controls are logging to some form of log management system and that proper analysis is being done on the logs.
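
As a trivial example of that last point, any control that can’t talk to your log platform natively can usually be wired in with a few lines of glue. The sketch below simply forwards events to a central syslog collector – the collector address and the message format are assumptions for illustration, not a recommendation for any particular product.

    import logging
    from logging.handlers import SysLogHandler

    # Placeholder collector address - point this at your real log management system.
    handler = SysLogHandler(address=("localhost", 514))
    logger = logging.getLogger("edge-fw-policy")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)

    # Any control (a firewall management script, an IPS wrapper, a proxy) can emit
    # events to the same collector, so the analysis all happens in one place.
    logger.info("deny proto=tcp src=203.0.113.5:50000 dst=192.0.2.10:23 rule=implicit-deny")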

Firewalls are the Network’s AV

And by that I mean: still necessary, but largely useless. It would be a brave company indeed that went completely without firewalls of any sort, but the value that we, as security professionals, get out of them is rapidly diminishing. As NGFWs and other platforms add capability, we’ll slowly move away from the firewall as a technology in its own right, but for now, we’re stuck with the stupid, broken things.

* This is categorically not true – but it should be.

Neil Anderson
Neil is a freelance network security architect and contractor working with a number of clients in Scotland and Europe. He is CCIE #18705 and also holds a CISSP. He can often be found sampling beer in remote locations and ranting about tech to anyone too stupid to run away. If you're very unlucky, he may talk to you in Gaelic. Neil can occasionally be found on Twitter.
  • xeon852

    On the routing side, we run full DFZ IPv4+IPv6 tables from 5 different providers into a Fortinet 800C, which has a list price of $10k. BGP convergence is on the order of 20 seconds for sucking in a full IPv4 table. Theoretical throughput is max 20Gbps; we’ve done 4+Gbps with no more than ~15% CPU utilization. I’ve yet to find a router that can do that for the same cost.

    So while as a generalization I agree with you, they can sometimes be useful in specific applications.

    • NeilTAnderson

      I’m a big fan of the Fortinet approach; given the price/performance, it’s difficult to argue against them. My point is more around traditional firewalls such as the Juniper SRX and Cisco ASA. Typically the NG functions haven’t been all that well integrated on those platforms.

      • http://twitter.com/cloudtoad Derick Winkworth

        The SRX has fantastic routing features, actually.

        • NeilTAnderson

          I’ve not seen very many deployments of SRX doing both firewall and routing, but even then, it’s not a NGFW.

          • cryptochrome

            Yes, it is. Ever since… Junos 11.2 I think. The SRX these days does the same stuff as a Palo Alto Networks firewall. And it adds a fantastic network OS on top of that (no one will argue that Juniper knows routing).

          • SRX-headache

            Did you ever deploy SRX? If you did, you’d know how buggy and terrible they are…

          • cryptochrome

            Quite a few actually. Feeling your pain, I know all about their shortcomings. And don’t get me started on NSM :-) Still, I think the SRX concept is awesome. If they can get rid of the bugs and all the quirks (I just found out they don’t log out-of-state sessions, jeez) it could be the best firewall.

          • NeilTAnderson

            I had a conversation with the Juniper guys recently about SRX, and I think you’re right. Once they’ve sorted out their problems, it’ll be a great firewall (once they work on their pricing ;)). I need to go away and have another look at them though, because I certainly didn’t pick up on any NGFW functionality.

          • SRX-headache

            I agree with you, but if you compare the SRX with PAN you see where the problem is: the GUI. I love Junos, the best CLI ever, but firewall management is (almost) all about the GUI… and the SRX’s GUI is unusable. It’s amazing that Juniper is still struggling with such simple things as IPSec VPN on the SRX – so buggy – which is why, when it comes to a standard IPSec VPN, people still want to stick with the SSG (ScreenOS), even though Juniper wanted to kill that platform… Too bad for them; they should have kept developing ScreenOS and/or not let Nir Zuk start PAN…

          • cryptochrome

            ScreenOS was rock-solid and awesome. As for SRX GUI aka NSM, I’ve done so much ranting about it in the Juniper forums and on my blog, that the Juniper head of product management eventually called me and asked me to stop :D

            But there is hope. NSM is old and dead. You should take a look at its successor, Junos Space. It’s very powerful, has a very modern HTML5 web GUI (with drag and drop, right-click context menus, etc.) and is a very good replacement for NSM. With Space, we finally have a good GUI for the SRX.

  • http://twitter.com/SDNgeek Brandon Mangold

    First, good general rant – I can’t argue with any of the principal points. I will say that NGFWs in concept are going to help us along, but here’s my dilemma. Given that firewalls were always supposed to be a supplement rather than the end-all be-all of network security, we have never used the firewall as more than a screen door in a defense-in-depth posture.

    My security architecture includes other services. INBOUND: Akamai global web caching and security services, F5 at the edge for more powerful DDoS attack mitigation, and F5 terminating inbound SSL and providing basic edge ASM firewall and WAF services. A dual-NIC, untrusted|trusted server flow in a 3-tier web-app-db hierarchy. Enterprise firewalls controlling general entry on trusted NICs from the external-facing DMZs to the app/DB tiers w/ general 5-tuple screen door tactics. Fully tapped infrastructure & F5 mirrored ports feeding Sourcefire IDS/IPS, providing visibility and a level of dynamic threat mitigation.

    OUTBOUND: No default route – the only way for internal clients to communicate with outside hosts is via proxy services w/ the ability to do SSL inspection (man-in-the-middle, basically, where clients must fully trust the cert from the proxy servers). Proxies provide layer 7 deep inspection of outbound user and server traffic to Internet hosts.

    GET TO THE POINT: While I am excited about being able to couple user-based authentication (which we have the capability for today) with the ability to define granular access to applications beyond, or even in lieu of, 5-tuple rules, I am somewhat cautious of the monolithic capability of an NGFW to provide all of the services described above in one box. I like having some of my unicorn eggs in different baskets.

    SUMMARY: Next gen firewalls are GREAT for people that have not already addressed the higher layer security concerns with security posture augmentation with other services and need/want to have a monolithic security policy enforcement point. Another way of saying this is that people who incorrectly treated the firewall as the end-all be-all of security, rather than a simple screen door, will love NGFWs b/c it gives them a lot of new cool stuff!

    For the rest of us NGFWs are another useful tool in our security architecture that really consolidates many features and services we currently leverage elsewhere.

    • NeilTAnderson

      You’re 100% correct that firewalls _shouldn’t_ be seen as the be-all and end-all of network security, but too often I’ve heard the comment “we’ve got a firewall, so we’re secure”. If you have a genuine defence-in-depth approach, I agree that NGFW will be of limited additional value – until it comes time to renew your estate. Personally, I’m not a subscriber to the “one function per device” dogma. I think there are opportunities for consolidation there that will end up improving security in the long run.

    • http://thegameiam.wordpress.com David

      No default route is really the key here, and one of the things which goes along with that is that once you’re proxying everything, the end-to-end rule is already gone. So at that point, why (other than state table problems) would NAT be a problem for an enterprise (other than from a religious point of view)?

      There are lots and lots of enterprises who find end-to-end to be about the most threatening idea they could imagine, and they are not without a valid concern.

      • NeilTAnderson

        There are two things here. Firstly, application layer proxies != NAT. In fact app proxies are the way that a lot of vendors _fix_ the NAT problem. Secondly, you are discounting the possibility of the kind of transparent content inspection that can be done using a L2 proxy.

        Regardless of whether end-end worries people, and I agree that there are valid concerns about it, it is _still_ the way the Internet was designed in both IPv4 and IPv6. Breaking the end-end principle still breaks applications and protocols now, after NAT has been around for a very long time. Working for enterprise clients, I consistently find that NAT is _still_ the aspect of firewalling that causes the most difficulty when implementing new services and making changes to an existing estate.

        • http://twitter.com/SDNgeek Brandon Mangold

          I’ve got to agree with Neil here. A proxy for outbound and an F5 for inbound create two separate, in theory ‘end-to-end’, TCP sessions. The proxy and the F5 are both really fronting content on a more intelligent layer 7 basis. Since the sessions are actually terminated on these security devices, they gain full visibility into the traffic traversing them (assuming no double tunneling or something) and can make intelligent security enforcement decisions, since they are a man in the middle.

  • http://twitter.com/sanjuanswan Jerold Swan

    “I might as well chuck the firewalls in the bin and buy a bunch of high performance routers with a noise filter ACL applied.”

    I absolutely think this is what you should do for public-facing Internet services if you can. State-tracking firewalls are just a giant DDoS enabler in this role. Push the security intelligence into the host if possible. Use a comparatively cheap hardware-switching router with stateless ACLs in hardware for the screen door. The best objection I’ve seen to this plan is that this class of router traditionally has crappy packet-logging capabilities, but you can often work around that with NetFlow.

    The only time I see an advantage to using a stateful firewall for Internet facing services is when the installation is so small that you’re using the same firewall in front of client services and public services, and you can’t afford to do it any other way.

    For client-to-Internet services, basic stateful firewalling with a malware-blocking proxy, and an aggressive network security monitoring posture (your “network visibility” layer – I like that term) goes a long way.

  • Ryan Malayter

    Isn’t it time we realized there is no perimeter to the network anymore? Every host and app needs to be able to live on the big bad Internet by itself, because the interior of the network is potentially hostile and becoming more so.

    This means host-based firewalling, aggressive patching, aggressive code auditing, vulnerability scanning, and most importantly transport-mode IPsec for communication between trusted hosts. Surprisingly, MSFT makes this easy, and has since about 2003. It’s a huge pain on Linux.

    Anyone who thinks an NGFW or IPS makes their network secure is a dangerous idiot. Security needs to be distributed into every end host, and the network should be fast and dumb. Legacy apps get IPsec with strict filters, which is a whitelisting approach better than any IPS.

    • cryptochrome

      Not sure why you need to call people idiots. Having a firewall and IPS is better than having nothing. And good luck with the whitelisting approach. I wonder who’s going to manage that.

      • Ryan Malayter

        Yes, a firewall or IPS is better than *nothing*. But I stand by my characterization of those who believe such devices “equal security” as both dangerous and either unintelligent or willfully ignorant of mountains of contrary evidence. Firewalls and IPS don’t really stop anything today that decently managed hosts wouldn’t also stop themselves. They certainly don’t seem to be reducing the frequency of data breaches. Network-based security is as hopeless as antivirus.

        As for managing whitelisting of connectivity using transport-mode IPsec, that’s exactly what MSFT Group Policies are for, and it really isn’t that difficult. We’re already doing this for public-cloud based servers, and looking to push the same into our own DC. You could try to script such a system yourself on Linux with Puppet or Chef, but it isn’t fun. We still configure Linux IPsec by hand unfortunately.

        • cryptochrome

          What are you talking about? Are you seriously proposing that firewalls/IPS != security?

          And as for your statement re:idiots and re:unintelligent – that says a lot about you. We get it. You are the intelligent genius here.

          I am not going to argue with someone this aggressive and arrogant.

        • NeilTAnderson

          Ryan, I have difficulty believing that your model scales to any useful degree within an enterprise scale DC. How are you defending the hosts themselves? AV? HIPS? How do you know if two servers communicating via IPSec aren’t sending malware traffic to each other? How do you stop someone creating a botnet within your DC through a single “inverse bastion” host?

          • Ryan Malayter

            I’ll grant you that ours is not a particularly large environment: on the order of 100 hosts in two DCs plus the cloud. But I don’t think this is the limit of scale for the approach.

            We are defending the hosts with host-based firewalling, AV, and HIPS. Plus vulnerability scanning, aggressive patch management, strong ACLs, file signature monitoring, etc. All of these are centrally managed, mostly from a single tool.

            As for knowing that two servers aren’t sending malware traffic to each other or becoming an internal botnet: each host has its communications policies established at a central management point (Active Directory in our case, but it could be vShield or Puppet or whatever). DB servers, for instance, only accept server-to-server traffic on one port, secured with IPsec, from a list of hosts validated by certificate exchange. Hosts only allow management traffic from a similarly restricted set of hosts, authenticated with IPsec certs (not by IP address). We of course do anomaly monitoring, but the toolset is currently poor (even worse than the current horrifying morass of SIEM tools), and I won’t pretend it is anything close to perfect. Log and performance management sucks in general.

            Most DC networks are configured with just a few security control points anyway, otherwise it becomes totally unmanageable. There’s the crunchy exterior and a soft interior (or perhaps a few dozen segments). With a per-host approach, you simply *have* to use centralized policy tools to establish per-host policies that follow the host around. This is what should be done with physical firewalls anyway, but can’t be because of topology and management issues.

            As for the “inverse bastion” scenario, if I understand you correctly, I guess we’re about as secure against that as a traditional DC with a firewall and IPS on every link that connects to a host.

            My basic point is that network-based security is increasingly blind, as more and more protocols wrap themselves in TLS, SSH, or whatever. Everything is encrypted by default these days: file server traffic, Exchange traffic, RDP/SSH sessions, app-to-DB sessions – heck, even our backup software wraps all comms in TLS. How could running all that encrypted traffic through a collection of NGFW or IPS boxes with SSL intercept possibly scale or be even remotely manageable?

            These days, you really cannot trust that the next physical server down-rack (or even worse, another VM on the same host) hasn’t been compromised. Which means you need to do enforcement at every host. Do you funnel *all* of your server-to-server communications through a firewall and IPS box?

            Look at it from another perspective: do you really think Google, Facebook or even MSFT is relying on network-based security, given they manage hundreds of thousands of hosts? Security enforcement simply has to be distributed to the endpoints at high scale. Those huge networks may not have to deal with the sheer number of random poorly supported crapplications that the typical enterprise does, but they certainly have at least as many internal services as an enterprise has applications, and therefore at least as much complexity from a security perspective.

    • NeilTAnderson

      The network absolutely _does_ have a perimeter these days, but it has to be set at a point where you have absolute control, such as the edge of the DC. The concept of devolving security controls down to the host level, with nothing at the network level, gives me the screaming heebie-jeebies. With the proliferation of different endpoint OSes out there, I think that dropping security off the network would be a disaster. I think of the network as an abstraction layer that allows different types of hosts to talk to each other reliably – the same concept applies to network security. If we can secure the communications channel, then we are some way towards securing the host.

      As for aggressive patching and code auditing, both of these are reactive (although the code audit is slightly more proactive). Really, what we need is to get away from signature-based controls and get some proper intelligence into the security infrastructure.

      Ultimately we can never get to a totally secure IT infrastructure, but we can try not to make it easy for attackers, which is why a blend of solutions is a good idea.

      • Glen Turner

        The idea that you can even trust the datacentre is a step too far. The argument for virtualisation is that you can move all the odd servers in your network from real iron to software, saving money. The flip side means that all of your organisation’s dodgy servers have moved to within your datacentre. You might consider a university IT department centralising hosting or a programming shop centralising all those “temporary” development experiments as an extreme, but not an uncommon one.

        There are three arguments against host-based firewalls. (1) There’s no good tool (yet) to take a corporate firewall policy and apply it to all of the hosts, so that you get the same consistency as with an edge firewall. (2) You are exposed to a host firewall zero-day. (3) You still haven’t solved the problem we’re avoiding – authentication and authorisation.

        (1) isn’t too bad. Consider that if you firewall per VM host then the guest VMs are covered. As long as your site is small enough to only run a few different types of VM hosts then this is a tractable problem.

        (2) also applies to edge firewalls, and is really a plea for defence in depth.

        (3) also applies to edge firewalls. But by moving access control to the hosts it becomes even more apparent that we often misuse access control to solve an authentication and authorisation problem. Some of this is due to application programmers shifting costs to network engineers. Some of it is due to the lack of one clear choice for federated authentication and authorisation.

        From a practice point of view I’d suggest two things:

        (1) On-host VM networks are part of your network, and should be managed as such. They are a great pinch point for applying network policy, such as access control and other firewall functions. Their performance and resilience can also undermine the steps you’ve taken in your non-virtual network to gain performance and resilience (e.g., by running spanning tree naively or running with stupid AQM optimised for hosts rather than routers), so you really need to get a grip on these virtual networks.

        (2) Design the edge; don’t outsource it to a magic box. I’m always amazed by the number of networks which send DNS traffic through their general-purpose firewall rather than running a DNS forwarder as an application-specific firewall, with its significant security advantage of never passing an exterior packet to the interior, and not consuming connection slots on the general-purpose firewall. Similarly, considering the placement of your video, Varnish proxies and HPC frontends can have big payoffs in keeping the load on the firewall down. This of course means that IT security people need to get their hands dirty with the detail of the enterprise’s IT, and that seems to be a step too far for some corporate cultures.

        Corporate culture also rears its head with monitoring and intrusion detection. Every machine should run SNMP, as a good central historical repository of traffic and machine behaviour is a quick way to ask “is this machine behaving as expected, or is our feeling that something odd is happening justified by the numbers?” As for that feeling of finding “something odd” – that’s the key task of intrusion detection.

        • NeilTAnderson

          I agree with a good deal of what you’re saying here. I’m going to fall back on the Network Engineer’s mantra with regard to trust boundaries though – it depends. If you have good management in place, and defence in depth within your data centre then you can consider the DC trusted. It’s certainly far easier/safer to trust than the LAN. That’s not to say that you don’t have internal security controls, but there needs to be a level of pragmatism in any security architecture, and ultimately that will come down to a risk-based decision. At some point, you will end up trusting something or somebody, which is why we buy stuff from vendors rather than building them ourselves. Well: that, and it’s massively cheaper.

          It still amazes me that there is a tension between practical security pros and infosec policy types who like to remain in ivory towers dispensing their wisdom without ever having to _do_ anything, and I think that’s what you’re getting at with your comment about IT security folk getting their hands dirty. How we fix that is a difficult one, but it needs to be fixed before we can move on in enterprise environments.

  • http://www.packetu.com/ Paul Stewart

    Firewalls are a band-aid that *attempt* to cover the gaping holes in the applications that are network connected. Not that we can completely eliminate them, but their usefulness continues to lessen. More and more traffic is encrypted end to end and thus can’t be inspected to any degree anyway. Network security should be about protecting the network itself (not the apps – which it can’t typically protect anyway) and potentially steering/scrubbing DDoS traffic based on requests from a host.

    • cryptochrome

      Disagree. Many good firewalls can decrypt SSL traffic (man in the middle), and yes, firewalls (the modern ones anyway) do understand and protect applications. Just remember that no network exists if there are no applications (an empty network would be kind of pointless). So the firewall always was and is about protecting applications. And in combination with IPS and WAFs, they do a pretty OK job of doing so.

      • NeilTAnderson

        I’m really uncomfortable about MITM SSL decryption, not just from a legal perspective, but also because we’re using a technique to attack the protocol’s primary function. OK, in this case the MITM is authorised, but it’s still a successful attack against a protocol intended to increase security. I’d much rather see SSL terminated on a load-balancer or something before the traffic is inspected on its way to the application, in the clear.

        Firewalls intercepting SSL also doesn’t help secure connections that are running on odd ports (for example SSL over TCP/80) or other protocols such as SSH or IPSec.

        • cryptochrome

          See, that’s where NG firewalls come in. They are absolutely able to intercept connections on odd ports, because these firewalls are no longer “port based”. Take Palo Alto Networks firewalls for example. They inspect traffic on layer 7 and understand applications. No matter on what port you transport your SSL connection, that firewall will see and decrypt it. Oh and while we are at it, it can also decrypt SSH :)

          PAN firewall rulebases can actually be designed without even specifying any ports (although it is not recommended). A typical rule looks like this:

          source: 10.10.10.0/24
          destination: 192.168.0.0/24
          application: HTTPS
          port: any (!)
          action: allow

          and then you’d have a second rule for the decryption:

          source: 10.10.10.0/24
          destination: 192.168.0.0/24
          application: HTTPS
          action: decrypt

          the firewall looks at any traffic that passes through it, inspects the flow until it recognizes the application (no matter which port) and then takes action.

          the bigger question is whether SSL is still a viable option for encryption, given all the recent vulnerabilities and disasters that happened. but that’s another discussion entirely.

          • NeilTAnderson

            Exactly – we need to see more of this kind of functionality being added into modern security appliances, because the old way of looking at policy based on a 5-tuple is broken. Ultimately though, the old type of enterprise firewall, which I’m still seeing deployed in the enterprise, is too expensive for the functionality and security value to be worth the effort any more.

            “the bigger question is whether SSL is still a viable option for encryption,” Ah – the question no-one is daring to ask :)

            You’re absolutely correct – SSL is looking like it’s on a very shoogly peg, but do we have a practical alternative at this point?

          • cryptochrome

            Good question, to which I have no answer. I read a very interesting article about it in a very good german IT magazine (Heise iX), and they seemed to have some ideas, but unfortunately, I forgot :D

      • http://www.packetu.com/ Paul Stewart

        Okay, so I don’t disagree with many of your points. However, I think I can use most of your comments to reinforce my position. My position is that firewalls are a band-aid that cover gaping holes. While I agree that many can decrypt SSL, decrypting every type of transit traffic is a challenge. My question to you is: should firewalls ever understand what should be in the packets better than the application itself? The answer is absolutely not. However, the fact of the matter is that application teams often don’t take this stuff seriously. Therefore we’re trying to address L5-7 stuff upstream, when the hosts should do their own validation.

        Today I use Network and Web Firewalls and will continue to do so. What I loathe is having to get a new firewall for every new protocol and application (I’m actually working on two cases of this right now). Additionally, I don’t want to be trapped into vendor X’s firewall because it is the only one that supports a certain application. The solution is to fix it at the host and application layer. I’m certainly not advocating the removal of network firewalls. I’m simply stating that there are limits to their applicability and we need to understand that. Greater-capacity links and wider adoption of encryption increase these challenges. My position is simply that firewalls are a band-aid and that better security comes from fixing the underlying problems.

        • cryptochrome

          yes, firewalls have their limitations, I agree completely. and I also agree that problems should be fixed where they occur. you’ll always need a police force though (firewalls). I am not sure the firewall is dead yet (as some writers of this blog want you to believe).

  • Aswath Mohan

    Security of the perimeter without hardening the inside of a network is something we have been speaking about for a long time. Seven years ago, Mu Security was founded on this premise. Mu Security was re-branded as Mu Dynamics and is now a part of Spirent’s security offering. The problem is that hardening the apps and services inside is easier said than done. Placing a firewall on the perimeter is much easier to do. And which IT/network security professional wants to be brave enough to invite trouble by saying their network did not even have the latest firewall technology when disaster strikes (as it surely will)? Microsoft (yes, it remains unfashionable to praise them) took the right approach when they started promoting secure development lifecycle processes. Security starts from the cradle, as it were, and apps and services need to be hardened from the design stage. Fuzzing and code inspection are key elements of good secure development practice, and Spirent offers some of these. We also test firewalls and can show how broken most of them are, if you would like.

    • cryptochrome

      thanks for the advertisement. we are all going to check out your product now :-/

  • Ryan Milton

    Well, NAT definitely is the problem, or a problem. Our network uses Juniper FW for IPSec mostly. Palo Alto FWs for the goblins.