More Thoughts on Hardening Internet-Facing Applications

In an earlier post, I presented some general comments on approaching security after listening to show 56 earlier this year. Now, here are my thoughts on some of the technical strategies discussed in the second part of the show (published as show 61).

When designing network security for external perimeters, I like to think of two key security principles:

  1. Protection layers serve two functions: they reduce the surface of exposure, but they also allow you to trade access for time. Every second your attacker spends jumping through one more hoop is another opportunity to detect and mitigate. It is the same reason armies fighting a real war put up defenses to slow an enemy's advance.
  2. Every element in the architecture should feed the overall monitoring system: every router syslog message, every firewall rule change or connection attempt, every SSL negotiation failure warning is potentially useful.

While I understand and share Mrs. Y's frustration with L2/L3 stuff, it still plays a huge part in perimeter design. See point 1 above…
With that in mind, here's what I would add to the comments made on the show:

  • Border routers. Loved the advice, but one thing I would add is that the border router can become an early warning system for a broader attack. For me, the main tasks in securing the border router are to have it cut all the usual crap (RFC1918 sources, bogons, your own addresses, etc.), to have it protect itself from attack, and then to configure the inner firewalls to alarm like crazy if they detect anything – and I mean anything – unusual from the routers themselves. No, the router should not be pinging the LAN range or starting SSH sessions…

This way, if your router gets compromised and starts misbehaving, you get to hear about it. As a side note, also configure each router in a redundant pair to alert if its sibling starts doing anything unexpected.
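
As a minimal sketch of that first filtering pass – Cisco IOS syntax assumed, with 203.0.113.0/24 standing in for your own public prefix – the inbound ACL on the outside interface might look like this:

    ! Sketch only: first inbound filtering pass on the border router.
    ! 203.0.113.0/24 is a documentation prefix standing in for "our space".
    ip access-list extended EDGE-IN
     remark drop RFC1918 and loopback sources
     deny   ip 10.0.0.0 0.255.255.255 any log
     deny   ip 172.16.0.0 0.15.255.255 any log
     deny   ip 192.168.0.0 0.0.255.255 any log
     deny   ip 127.0.0.0 0.255.255.255 any log
     remark drop packets claiming to come from our own prefix (spoofed)
     deny   ip 203.0.113.0 0.0.0.255 any log
     permit ip any any
    !
    interface GigabitEthernet0/0
     ip access-group EDGE-IN in

Note the 'log' keywords: they feed point 2 above, turning the denies into monitoring events rather than silent drops.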

  • Load balancers. The show did not mention load balancers as a useful security mechanism, but there’s plenty of security goodness there.
    • First of all, load balancers can carry out some level of application-aware firewalling, such as blocking unauthorized HTTP methods, unusually large requests, and anything matching known-bad regexes. If all your pages/apps live under /ourapps, you can block every other URL path… (a sketch follows this list).
    • Load balancers can also be a useful place for terminating your SSL/TLS tunnels. This makes security inspection that much easier on the backend. It assumes, of course, that any compliance mandates your system falls under – and your friendly auditor/assessor/QSA/… – allow this type of architecture.
    • As a bonus, your load balancers can offer an ‘easy’ method to support IPv6 clients, assuming you need something like this.
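
To make the application-aware filtering point concrete, here's a rough sketch in HAProxy syntax; the names, addresses and the /ourapps path are all illustrative, and commercial load balancers have equivalent knobs:

    # Sketch only: application-aware filtering at the load balancer.
    frontend www-in
        bind :80
        acl ok_method  method GET POST HEAD
        acl ok_path    path_beg /ourapps
        acl huge_body  hdr_val(content-length) gt 1048576
        http-request deny if !ok_method
        http-request deny if !ok_path
        http-request deny if huge_body
        default_backend webfarm

    backend webfarm
        server web1 10.1.2.11:80 check
        server web2 10.1.2.12:80 check
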
  • Firewall layers. Depending on your needs, a dual-layer firewall architecture can be beneficial: the old dual-layer, dual-vendor strategy offers a modicum of protection against software-specific vulnerabilities, albeit at the cost of more complex management.
  • Use multiple NICs on the servers. To me, the benefits of web servers with multiple NICs – separate connections for internal resources and for the external public – are real: there is no need to listen for purely internal traffic on external interfaces, and vice-versa.

The analogy I like to use is a bank branch. You work with a teller who offers you a limited set of services, and any time the teller needs something – change, more paper currency, an approval – they don't go line up in front of another teller; they use a separate, internal channel.
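
In configuration terms, this mostly means binding each service to the interface it belongs on. A trivial sketch, assuming eth1 is the internal-facing NIC at 10.1.2.10:

    # Sketch: keep internal-only services off the public NIC entirely.
    # /etc/ssh/sshd_config – management SSH listens on the internal address only
    ListenAddress 10.1.2.10

    # ...and verify nothing else is accidentally bound to the public side:
    # netstat -lntu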

  • Block outbound connections from your web servers! People seem to forget that the original idea behind the DMZ – and its tradeoffs – is that it allows you to be more deterministic about what kind of traffic is allowed there. As a general rule, your web server does NOT need to make outbound connections. The obvious exceptions are things like backup clients, database connections, etc., but these are all made to internal addresses anyway. Outbound connections to external addresses should be very rare (a minimal host-level sketch follows).
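
Here is that host-level egress policy sketched with iptables – the DMZ firewall should enforce the same thing independently, and the database address/port are assumptions:

    # Sketch only: default-deny egress on the web server itself.
    iptables -P OUTPUT DROP
    # replies to inbound connections are fine
    iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    # explicitly allowed internal destinations (database, backup, etc.)
    iptables -A OUTPUT -p tcp -d 10.1.2.20 --dport 3306 -j ACCEPT
    # log anything else before the policy drops it – this is your alarm
    iptables -A OUTPUT -j LOG --log-prefix "WEB-EGRESS-DENY: "
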
  • IPS. One key fact about IPS architecture was not mentioned: relying on TCP RST to tear down a connection means trusting that the endpoints that receive those resets will honor them… If you want to be sure a connection is being dropped, you MUST put your enforcement device in-line with the traffic.
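
For illustration, truly in-line looks like this sketch with Snort 2.9's afpacket DAQ bridging two interfaces (interface names assumed) – drops happen in the data path rather than via after-the-fact resets:

    # Sketch: Snort in inline (IPS) mode, bridging eth1 and eth2.
    snort -Q --daq afpacket -i eth1:eth2 -c /etc/snort/snort.conf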

Finally, as a side note, it was nice to hear Check Point get some love for a change… :-) Sure, it is different from your typical ASA/SRX/Netscreen/… and it does have its share of issues, but there are good things about it too, just like most other products out there.
Happy [secure] packet pushing in 2012!

fmontenegro
Fernando Montenegro is a security technologist actively morphing into a security/network guy. Following a long stint doing external professional services at a large IT company, he's now navigating the pre-sales waters at a networking vendor. He's an aspiring blogger at PacketPushers and elsewhere (http://netsecramblings.blogspot.com/). He's also active on LinkedIn, Twitter, Google+ and Facebook. Originally from Brazil, he now calls Canada home.

  • Craig Askings

    I also like using the border routers to blackhole IP addresses of very aggressive attackers. I’ve built automated systems that would pick up really obvious SIP scans against public-facing servers (1–2 Mbps of SIP registrations from a single IP) and blackhole them to remove that load and the noise from logs.

    • Fernando Montenegro

      Yes, I like that, especially because it shows another feature that I neglected to mention – building the infrastructure to your needs, not to a “best practice” (a term I dislike). We could debate whether this filtering is better suited to the router or to the ‘firewall’ – there are pros and cons to both – but that is almost a matter of personal/organizational preference.
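
      For anyone who wants to try it, a source-based blackhole can be a one-liner plus loose-mode uRPF (IOS syntax assumed; the attacker address is a documentation example):

        ! Loose uRPF on the outside interface drops anything whose
        ! source address routes to Null0
        interface GigabitEthernet0/0
         ip verify unicast source reachable-via any
        !
        ip route 198.51.100.7 255.255.255.255 Null0
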
      Thanks for the comment!