Why should I firewall servers? (serverfault.com)
47 points by mcobrien on Nov 13, 2010 | 15 comments


He's right; if you only have a single host, and you're very confident about the host-based tools you're using to restrict access (note: you're much better off with netfilter or pf than you are with TCP Wrappers), there's little to be gained from deploying an additional machine just to do access control.
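A minimal sketch of that host-based approach in netfilter terms (the management subnet 203.0.113.0/24 and the open ports are made up for the example):

    #!/bin/sh
    # Default-deny inbound; keep loopback and established flows working.
    iptables -P INPUT DROP
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    # SSH only from a management subnet, HTTP from anywhere.
    iptables -A INPUT -p tcp -s 203.0.113.0/24 --dport 22 -j ACCEPT
    iptables -A INPUT -p tcp --dport 80 -j ACCEPT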

But, once you have two machines, it makes far better sense to centralize access control in a single place. With two machines, it's simply good engineering. As your stable of machines grows, it's less a style point and more a necessity. It's hard to keep a group of machines in sync with a single policy, and virtually impossible to avoid mistakes and windows of vulnerability over the long run.

An attacker that truly wants to target your network can continuously profile it, and is more likely to spot your mistakes than you are. A sensible way to approach network security is to assume that there's always someone willing to put forth the effort to break in.


For those who make the effort, configuration management and monitoring systems that continuously profile one's own network are another way to deal with a growing, unfirewalled cluster. And they avoid having to learn yet another vendor's CLI, or slowing iteration down because a one-off system resists change.
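A crude sketch of what that looks like in practice -- render one shared ruleset and push it to every host, rather than hand-editing each box (hostnames and paths here are hypothetical):

    #!/bin/sh
    # Distribute one canonical ruleset and load it on each host.
    RULES=/srv/policy/iptables.rules
    for h in web1 web2 db1; do
        scp "$RULES" "root@$h:/etc/iptables.rules" &&
        ssh "root@$h" 'iptables-restore < /etc/iptables.rules'
    done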


As someone who sells exactly such a system, let me advise everyone not to use them as a replacement for a single, central, simple, coherent firewall.


In any case, folks should be continuously profiling their networks, with the results fed into monitoring and alerts. I find many firewalls doing different things than my clients think they are -- or that have simply been left to diverge from reality in some form of config rot. Automated testing surfaces these differences.
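One way to automate that testing, sketched with placeholder addresses and paths: scan your own ranges on a schedule and diff the open ports against an approved baseline.

    #!/bin/sh
    # Alert when the set of reachable ports drifts from the baseline.
    nmap -sS -p- -oG - 198.51.100.0/24 | grep '/open/' | sort > /tmp/scan.now
    if ! diff -u /etc/scan.baseline /tmp/scan.now; then
        mail -s 'firewall drift detected' ops@example.com < /tmp/scan.now
    fi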

Can you really put "simple" in the same sentence as firewall? Vendors prevent that. Folks don't have time to maintain them.


The way I think of it is a firewall shouldn't be needed, but the cost of adding and running one is pretty small. Besides, saying "just don't make mistakes and you won't need one" is useless -- everyone makes mistakes, and that's why we use firewalls.


Exactly. By using iptables on his host, as he describes, he is in effect firewalling it. But if you have 25 hosts you probably would want a firewall in front of them.


And if you have 250 hosts, you want to use a firewall between groups of them so the network is compartmentalized: if one host is hacked, it can't take down all the others.
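For instance, the firewall between two segments might forward only the one flow the groups actually need (the subnets and database port are invented for this sketch):

    #!/bin/sh
    # On the firewall between segments: the web tier may reach the db tier
    # on the database port; nothing else crosses the boundary.
    iptables -P FORWARD DROP
    iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A FORWARD -s 10.0.1.0/24 -d 10.0.2.0/24 -p tcp --dport 5432 -j ACCEPT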

When a server is compromised, iptables on it won't help anymore -- not even to detect that the server is compromised. Only a firewall running on a separate piece of hardware helps.

A system should never be designed with the assumption that it will never be hacked: that is like planning for a life where you never get sick.


Anybody who has worked in a large organization, or heck, any company with more than 30-40 people in the systems (DB/Servers/Network/Syseng) infrastructure group also knows that firewalls have the advantage of "Defense in Depth." If a sysadmin runs a quick pfctl -F a while troubleshooting a problem, and neglects to restore the ruleset, the firewall team has them covered. And the firewall team will never, ever run pfctl -F a. They likely will require multiple-day advance notice to even add a new, very specific rule.
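(For reference, the safer troubleshooting habit is to reload a known-good ruleset rather than flushing it; /etc/pf.conf below is just the conventional path, not necessarily what any given shop uses.)

    # The mistake described above -- flush everything:
    pfctl -F a
    # The safer pattern -- syntax-check, then reload the saved ruleset:
    pfctl -nf /etc/pf.conf
    pfctl -f /etc/pf.conf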

Also -- having the network's policy guaranteed at a single chokepoint (usually a ruleset that generates firewall configurations, which are then pushed onto hundreds of firewalls) is a big win. One spot to audit.

With all that said - if you are a tiny 2-3 person shop, you can probably get by without Load Balancers, Firewalls, or heck, most infrastructure out there. Just throw it all on AWS/slicehost/linode and harden your hosts to do the right thing.

But, when you get big, and have hundreds (thousands?) of hosts, and are tempted to run them yourself, you will have firewalls and load balancers. Many of them, in fact.

Check out Margrave (http://www.cs.brown.edu/~sk/Publications/Papers/Published/nb... ) for some of the interesting stuff around formalizing policy inspection.


Firewalls can protect you from a number of things because they deal with the entire stack at once, which TCP Wrappers can't do.

They can detect and take action against denial of service attacks, port scans and probes for known vulnerabilities. They can manage incoming connections statefully.

They can block outgoing connections (big one there), including replies from the stack that give away information you don't want to give away.

They can also log what's going on.

Really, the ideal system responds only on the one port you're serving (and should stop even that if someone DoSes or probes it).
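A rough netfilter sketch of a few of those capabilities together -- the port, rate limit, and log prefixes are arbitrary for the example:

    #!/bin/sh
    # Stateful inbound on the one served port, a crude rate limit,
    # default-deny outbound, and logging of everything dropped.
    iptables -P INPUT DROP
    iptables -P OUTPUT DROP
    iptables -A INPUT  -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT  -p tcp --dport 443 -m limit --limit 50/second -j ACCEPT
    iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A OUTPUT -p udp --dport 53 -j ACCEPT   # allow DNS out, nothing else new
    iptables -A INPUT  -j LOG --log-prefix 'drop-in: '
    iptables -A OUTPUT -j LOG --log-prefix 'drop-out: '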


This advice seems to me to be misguided.

Suppose you have a standard database-backed website. With his procedure the database would wind up on the internet, exploitable by anyone who knows of a security flaw in it. Under standard operating procedure you'd have a firewall through which the web servers are reachable, and a second firewall that allows the web servers -- and nobody else -- to connect to the database. Now if someone on the internet knows of a problem in your database software, it is not easily exploitable.


No. With his procedure the database would be properly configured to listen only for connections from the local network or localhost. So it wouldn't really be any more vulnerable than if there were a firewall protecting it.

His point, as I understand it, is that you should harden your servers, firewalled or not, and a firewall doesn't add a lot of value to hosts that are already hardened.


In an ideal world where your OS has no vulnerabilities, your application stack has no vulnerabilities and you never make configuration errors I would agree.

However, as far as computer security goes it is very far from an ideal world.

See ghshephard's comments about "Defense in Depth" above.


I didn't see any mention of a DMZ. What percentage of his company's servers are hardened in the way he describes? What about his users' desktops? There's got to be some point at which one declares that the "administrator" of a computer (an individual user with a laptop, even) must be protected from himself.

What I liked about his argument was that he was actively thinking about threats instead of applying a heuristic of "it's ok -- we have a firewall." With all this said (and asked), I'm no security expert.


Am I missing something? If you have a server open to the internet, like a database server, it means it's open to brute force attacks. And who wants that?


If you are running the database locally, bind the daemon to localhost only; otherwise bind it to a non-routable private address. Problem solved.
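For example (assuming PostgreSQL or MySQL -- the thread doesn't say which database is in play; the addresses are placeholders):

    # postgresql.conf: listen on loopback only, or on a private address
    listen_addresses = '127.0.0.1'      # local-only
    # listen_addresses = '10.0.2.5'     # or a non-routable private address

    # my.cnf equivalent:
    bind-address = 127.0.0.1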



