Question: How much Security is enough?

Posted by simonw on Wed 7 Sep 2005 at 08:32


To an extent the question is rhetorical: security depends on the risk. Online banks need more than sites offering free archives of erotic stories; that is what security policies are all about.

What got me thinking was a comment on NANOG that OpenBSD has features x, y and z, and thus has more security features than the other BSDs.

I don't know the other BSDs so I can't comment on whether that is accurate, but I do know that most of the features mentioned (and some extras) are already in Red Hat's latest releases of Fedora. Several are noticeably absent from the stock Debian kernels.

The discussion was veering towards routers and embedded devices, but I guess there are minimum levels of security you need to establish to avoid being plagued by malware, like a certain proprietary vendor's operating systems.

(BTW: the "blackhat" discussion of Cisco IOS mostly impressed on me what a good job Cisco had done in general, despite their somewhat ham-fisted efforts to suppress the talk.)

Whilst I wouldn't want to have to try and secure Windows XP, in practice we have very few issues with security: one rooted and rootkitted Red Hat box running a load of websites (probably rooted before I joined), a couple of spyware trojans that sneaked onto some Windows PCs, but nothing at all for about a year apart from a hosting customer who insists on leaving MS SQL listening to the world.

Do you guys harden your Debian boxes? Whilst Red Hat seem to be walking the security walk, and Microsoft talking the security talk, is it important to you that Debian implements major new security features, as opposed to the other things the developers could be doing, like better desktops? Or is it "good enough" for most practical purposes?


Posted by Steve (82.41.xx.xx) on Wed 7 Sep 2005 at 09:14
[ View Steve's Scratchpad | View Weblogs ]

When you talk about "security" there are a few different meanings:

  • Security of the system itself.
  • Security of the software installed upon the host.
  • Security of information.

Some of these can be addressed by the available "security frameworks", others by the actual setup of the machine (i.e. forbidding anonymous access).

When it comes to security frameworks PaX appears to be pretty much discredited as I understand things, so the only remaining major security frameworks are:

  • GRSecurity
  • SELinux - used in Fedora, IIRC?

Russell Coker has been doing a lot of good work on SELinux, and a lot of progress has been made - for example libselinux1 appears to be included in the base distribution nowadays.

However I must admit I've used neither setup, and don't really feel that I need to have a full-blown security framework in place upon my hosts.

I suspect that if you need a full setup you'll know already, although having these frameworks tested large-scale in distributions like Fedora is certainly a good thing.

For me security comes down to the basics, which apply to pretty much any operating system:

  • Strong passwords.
  • Avoiding untrusted users - if possible.
  • Minimal installed services upon hosts.
  • Adequate firewalling rules to avoid services being overly exposed.
    • A firewall isn't a magical cure-all, but especially on a LAN it can do great things (a minimal sketch follows this list).
  • Fast intrusion detection via filesystem checksums, SNORT sensors, and application-specific monitoring.
    • e.g. mod_security for Apache
  • Regular patching via distribution, or upstream, supplied updates.
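
To make the firewall point concrete, here's a minimal sketch for a host that only offers SSH, SMTP and HTTP to the world; the ports and policies are assumptions to adapt to whatever services you actually run:

    # Minimal host firewall sketch - adapt ports and policies to your services
    iptables -P INPUT DROP
    iptables -P FORWARD DROP
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j ACCEPT    # SSH
    iptables -A INPUT -p tcp --dport 25 -j ACCEPT    # SMTP
    iptables -A INPUT -p tcp --dport 80 -j ACCEPT    # HTTP
    iptables -A INPUT -p icmp -j ACCEPT              # keep ping/PMTU working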

Most of the currently exploited bugs can be tracked via a subscription to Bugtraq, or similar mailing lists. (They seem to like unsubscribing my email addresses after a few months and it is never clear why ..)

I would like to see further improvements to Debian's security which is part of the reason I started auditing code, and working on an SSP compiler for Debian.

But honestly? Most system compromises could be prevented even without changes to the distribution itself. IMHO compromises usually result from one of two things:

  • Outdated and insecure software, often for which patches are already available.
  • Poor configuration on the part of the person installing it.

Basic improvements to Debian's base install would be useful, such as reducing the amount of installed software (e.g. no gcc on firewalls), or suggesting a firewall at installation time.

Other changes could be suggested, such as "all installed services must be explicitly enabled", which would likely help, but many people would probably find these more painful than helpful.

Steve
--

[ Parent | Reply to this comment ]

Posted by chris (84.48.xx.xx) on Wed 7 Sep 2005 at 09:26
[ View Weblogs ]
Now - there's something - how about an article on Snort? How best to configure it for a server, web-server and workstation (I'm guessing they're different).

I have run snort before - but that was when I was just starting with machines available via the net and I never really understood what I was supposed to be watching for :)

[ Parent | Reply to this comment ]

Posted by Steve (82.41.xx.xx) on Wed 7 Sep 2005 at 09:33
[ View Steve's Scratchpad | View Weblogs ]

I'll add it to the list ...!

With snort the basic setup is fairly simple, once you've decided how many "sensors" you want, and where you want to place them.

The real hard work comes in setting up appropriate rules.

If you're not careful you'll get too many false positives from rules that match incoming IIS exploit attempts - even though you're only using Apache, for example.

As the rules are going to be mostly site and location specific, that would be a problem even if I were to write something up .. I guess.
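
As a rough illustration - rule file names and SIDs vary between Snort packages, so treat the specifics here as assumptions - the usual approach is to only include the rule sets relevant to your services, and then suppress individual noisy rules:

    # snort.conf: only load the rule sets that match the services you run
    include $RULE_PATH/web-attacks.rules
    # include $RULE_PATH/web-iis.rules    <- left out: no IIS on this network

    # threshold.conf: silence one specific noisy rule by its SID
    suppress gen_id 1, sig_id 1002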

Steve
--

[ Parent | Reply to this comment ]

Posted by Anonymous (84.45.xx.xx) on Wed 7 Sep 2005 at 20:21
We aren't doing any intrusion detection beyond Tripwire currently, so I will take that on board as something we should perhaps do more of. We do some ad hoc network observation and monitoring, which sometimes shows up curiosities, but nothing to worry about so far.

I think general system monitoring at work is less than ideal. But then we have a bespoke monitoring service for all public-facing, and some internal, services. So we are good at spotting when things stop, but we could do better at monitoring content. Not that we don't do it, but it tends to be focused on specific issues, such as spotting people hosting illegal content.

I have some generic DNS checking, but the scripts need a lot of maintenance work, as well as cleaning up the errors which will remain after the scripts catch up with recent DNS changes.

[ Parent | Reply to this comment ]

Posted by Anonymous (194.149.xx.xx) on Tue 21 Feb 2006 at 20:33
It used to be quite good and feature-complete the last time I needed it. The only problem is that you have to learn the programming language and program acceptable running patterns. http://freshmeat.net/projects/medusads9/

Not sure whether it's dead now, or has been replaced.

[ Parent | Reply to this comment ]

Posted by Anonymous (80.126.xx.xx) on Wed 7 Sep 2005 at 20:22
While you are implementing the best security framework and all that, don't make notes, don't share any info with your fellow assistant admins, and NEVER tell anyone else about the host.

Your security is only as strong as the weakest link, and that is the human part.
Social engineers don't need to know a lot about the technical stuff; in the end your users will give them the info over the phone.

Read "The art of deception" and you'll know what I mean.

[ Parent | Reply to this comment ]

Posted by Eirik (129.177.xx.xx) on Thu 8 Sep 2005 at 14:26
Wrong conclusion!

(Incidentally I'm reading "The Art of Deception" now (about halfway through), and I'm not overly impressed. Too light on real details for my taste (makes for slightly dull reading). Don't get me wrong, it's a great book for management, and for admins who might not have tried getting into the hacker mindset much -- but I hope and believe there's little in the book to shock experienced sysadmins.)

Probably the most relevant part of the book is the last few chapters, with recommendations on security policies and staff training.

But never writing anything down? What? Not sharing info with your fellow admins? If anything, policies should be formed together with the team. Everyone needs to know why and how. Then it needs to be documented.

But you need to secure that information. Don't leave it up on your intranet. Either make a hardcopy manual and lock it up in a real safe, or share the information, but encrypted. Something as simple as using gpg and distributing it via email might be enough. But remember to watch out for temp files and swap; make sure there's no way to avoid the encryption.
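
As a rough sketch of the gpg approach (recipient addresses and file names are made up for illustration), the document is encrypted to each admin's public key before it goes anywhere near the mail system:

    # Encrypt the admin documentation to two admins' public keys
    gpg --encrypt --recipient alice@example.org --recipient bob@example.org \
        --output network-docs.txt.gpg network-docs.txt

    # Each recipient decrypts locally; only the .gpg file travels by mail
    gpg --decrypt --output network-docs.txt network-docs.txt.gpg

    # Remove the plaintext afterwards (and mind swap/tmp, as noted above)
    shred -u network-docs.txt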

There's a reason all those three letter government institutions have a lot of reports with "TOP SECRET" watermarks. It's so that it's obvious to anyone handling the document that the information is sensitive.

You need to think about the lifecycle of sensitive information; it must be protected, and it must be securely destroyed when it's out of date. In the absence of a real budget, burning documents in the backyard and stirring up the ash works about as well as an industrial-grade shredder.

Effective security is about awareness of the sensitive nature of the information you know. Maybe some of it should never be written down (e.g. passwords), but the important part isn't whether it's written down or not; it's whether it's available to third parties, and whether you're aware of if and how a third party might get to know the information.

Security through obscurity, isn't. Remember that a lot of information regarding the structure of your network can be learned, and should be easy to learn, from e.g. DNS names. After all, you want your network to be accessible to your own users.

The message in "The Art of Deception" is that awareness and education are important security tools. Not that giving people the information they need to get their job done is wrong, nor that all sensitive data should be memorised because you can't trust any safes or computer systems.

[ Parent | Reply to this comment ]

Posted by Anonymous (82.119.xx.xx) on Thu 8 Sep 2005 at 15:15
I also think it's a bad idea not to share with other admins. This was one of the reasons many routers at Slovak Telecom were hacked for a few months (!) without anyone noticing... Too many admins and no coordination.
When you don't know how things SHOULD BE, how can you see when they are DIFFERENT? How do you know it's a hacker and not one of the admins?

[ Parent | Reply to this comment ]

Posted by gonad (219.89.xx.xx) on Thu 8 Sep 2005 at 09:33
Always update your system when security updates are available... ALWAYS.

I had a machine running Debian that was acting as an everything server (firewall, router, mail + web server), and one of the things I was running was Drupal. This machine was firewalled (iptables) to buggery, but that didn't stop it getting _0wn3d_.

Why? Not too long after Sarge came out there were a few security updates that I didn't respond to immediately - one for sudo (local exploit) and one for Drupal (remote exploit). I didn't apply the updates when they were available and then it slipped my mind for a few days.

Too late.

I logged in and found syslogd running with odd options... it wasn't like that the night before. WTF? I went to /var/log to take a look, files were gone... WTF? I installed chkrootkit and ran it - oh, shite, 0wn3d.

I shut the machine down, pulled the hard disk out and started reinstalling Debian on another hard drive, leaving the compromised hard drive as it was (and on a desk, not in a PC).

What happened? Some script kiddy had something that was attacking Drupal, exploiting its remote hole, and if that was successful it would then attempt the sudo exploit and bingo - got r00t. I also imagine the removal of /var/log/* and the syslogd funkiness were scripted as well - I don't think anyone would bother wasting their time on something that had my random ramblings on it.

What can be learned? Four things (plus a cron sketch below):
- always apply security updates
- firewalls are most definitely not a cure-all
- laziness creates more work in the end
- always apply security updates
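
One low-effort safety net (a sketch only - it merely reports pending upgrades, it does not replace reading the DSA announcements, and the cron-apt package does this more thoroughly) is a nightly job that mails root whatever is waiting:

    #!/bin/sh
    # /etc/cron.daily/pending-updates - list packages with upgrades available
    apt-get update -qq
    # cron mails any output to root, so only print the "Inst" lines
    apt-get --simulate upgrade | grep '^Inst'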

[ Parent | Reply to this comment ]

Posted by wouter (195.162.xx.xx) on Fri 9 Sep 2005 at 20:56

While you are right that different sites need different levels of security, this has also been proven wrong up to a point.

No server, no matter how secure, can protect against a DoS from a sufficient number of dummy clients. Viruses and worms will eat your bandwidth and fill your mailbox. Aunt Tilly's supermarket Windows 98 desktop with kitten wallpaper and mouse cursors can bring down your gateway... On the internet, no server is an island.

I remember a time when you could pick any large pr0n network and find it filled with pre-installed machines with all services open and never logged into, except perhaps FTP. That is a risk to you too, because a potential blackhat has all the time in the world to scan your machines, and to cover his tracks, from those insecure networks.

These days, you -- sadly enough -- have to be paranoid about anything and everything.


Some things I think are important:

Any service you run can compromise your machines. If you need some FTP on a web server, think about putting the files in a web directory and getting rid of the FTP server. Run rsync over ssh instead of having a daemon listening. Most likely you don't even need a firewall on a single-homed host if you only run ssh, smtp and apache; unless you only log in from fixed IPs, you probably can't filter ssh without proper scripting, and it's unlikely you'd want to firewall smtp or apache beyond their own application-level options.
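
For example, replacing an FTP upload with rsync over ssh needs nothing listening beyond sshd (paths and hostnames below are placeholders):

    # Push a local site to the server over ssh; no rsync daemon required
    rsync -avz -e ssh /home/me/site/ webuser@www.example.org:/var/www/site/

    # Pull a copy back the same way, e.g. for backups
    rsync -avz -e ssh webuser@www.example.org:/var/www/site/ /home/me/backup/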

Think about following the shortest code path possible, and throw out any options or modules in daemons that you won't use.

Use ssh and scp. Learn about OTP and keys.
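
A minimal key setup looks something like this (hostnames are placeholders; ssh-copy-id ships with the OpenSSH client on Debian, but appending to authorized_keys by hand works just as well):

    # Generate a key pair (use a passphrase) and copy the public half across
    ssh-keygen -t rsa
    ssh-copy-id user@server.example.org

    # Or by hand:
    cat ~/.ssh/id_rsa.pub | ssh user@server.example.org 'cat >> ~/.ssh/authorized_keys'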

Remember 'netstat -natup'.

Never set up POP, IMAP or other protocols that use plain text authentication with real system users (who can log in).

Install your servers yourself and try very hard not to give anybody else root access. Then you know what you did and didn't do when things change in mysterious ways, and, for instance, the web designer guys can't accidentally set up an open proxy (if you want to have thousands of connections per minute, give it a try).

All cgi, php, zope, ... code can be (will be) insecure, and you probably should read through it yourself (if you didn't develop it yourself). If you have the time... and you're paid enough...

Don't install sniffers unless you need them, and if you do, remove them after using them. Don't make things too easy.

Install an integrity checker like aide, tripwire, samhain, etc. IMHO, sometimes it's more trouble than it's worth -- especially if you often update your software and change your configuration -- but they run very nicely on machines that aren't touched very often.
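
On Debian the aide package largely sets itself up; a rough outline of the lifecycle (the exact commands and paths are from memory, so treat them as assumptions) is:

    # Install, build the initial database, then check against it periodically
    apt-get install aide
    aideinit                              # builds /var/lib/aide/aide.db.new
    cp /var/lib/aide/aide.db.new /var/lib/aide/aide.db
    aide --check                          # report anything that changed

    # After a legitimate upgrade, rebuild the database so old changes
    # stop showing up in every report
    aide --update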

Use indirect protocols where possible. If some pointy-haired sales person or manager wants to access a large database with sensitive information from a public airport machine or their shiny new spyware-infested laptop, simply don't allow it. Tell them to email simple commands to a robot that parses them and sends back the result of a self-generated SQL query; or write them a simple interface that does this. Think about adding PGP to all of this. Initially all this means more work for you, but nobody connects directly to the database from outside and it's practically impossible for malicious input to reach any sensitive data. (It's also difficult to sack you because you're the only one who understands the whole system. ;) )

From here on, you can start playing with front-end and back-end servers and more complex setups, but you probably wouldn't be reading this then. :)


I probably forgot some things, but it's way long enough by now and I'm hungry.

[ Parent | Reply to this comment ]

Posted by Anonymous (84.45.xx.xx) on Fri 9 Sep 2005 at 23:16
"Think about following the shortest code path possible and throw out any options or modules in daemons that you won't use."

Despite a fair background in IT and system admin, the modules Apache runs by default leave me lost. It is difficult, when you are using just a small fraction of what a tool can do, to gain the experience to know how much of the "whole" tool is required. Chopping things out until something obviously breaks is okay up to a point, until you discover it is broken, but not so obviously broken, a few weeks later.

The passwords comment was interesting and got me thinking. I've been migrating to using keys with SSH, but I still have passwords allowed on most boxes because of the sheer convenience factor. I'm sure a better plan and some automation of public key distribution would get rid of the need for passwords.
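
Once the keys are in place everywhere, the switch itself is a one-line change in sshd_config (a sketch; test from a second session before you log out of the first):

    # /etc/ssh/sshd_config
    PasswordAuthentication no
    #PermitRootLogin without-password    # optional: root logins only via keys

    # then reload sshd
    /etc/init.d/ssh reload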

I agree on the "basic level of security". You need a pretty sophisticated level of security to provide almost any non-trivial Internet-facing service. Okay, you can often do that through good software choices and attention to detail, but attention to detail is hard to maintain for all of one's working life.

15 years as root - you learn just not to make typing mistakes in commands and scripts with root privileges ;)

[ Parent | Reply to this comment ]

Posted by wouter (195.162.xx.xx) on Sun 11 Sep 2005 at 02:44

I've never had anything bad happen as root, but I once deleted a large part of my home directory (on my own desktop). It just becomes routine to always use rm -rf, and I accidentally put a space somewhere in the command. That hurts because, of all things, it's the one thing that cannot be recovered, and backups are consistently out of date. And the most painful thing is that you almost always realise there's something wrong exactly at the moment the 'enter' key goes down...

[ Parent | Reply to this comment ]
