check_by_ssh question

Andreas Ericsson ae at op5.se
Fri Mar 26 13:44:02 CET 2004


Paul L. Allen wrote:
> Andreas Ericsson writes:
> 
> I haven't seen anything that doesn't have local exploits. 
I have. It's called Owl, and you can find it at www.openwall.com/Owl

> Most sensible
> people apply patches as soon as they come out.  But if you're foolish
> and have a monitored machine that has local exploits then it is very
> likely your monitoring machine also has the same exploits.  Therefore
> you have more important things to fix before you worry about nagios
> being used as an exploit.  And if you fix the important things first,
> the nagios exploit then can do little damage.
> 
And most patches come out because a problem has been detected. Being 
secure afterwards is every bit as stupid as feeling pleased your 
anti-virus program removed some viruses (when they never should have 
gotten in in the first place).

> Your proposed nagios exploit relies upon far more serious exploits in
> order to be able to use it in the first place.
Not really, no. A local user can probably find something to elevate his 
privileges with. A couple of months ago, a remote exploit was found for 
apache 1.3.28, and today there was a new issue with 2.0.48 (3 different, 
actually), which could allow remote users to gain shell access.

> Your attitude is "Somebody could use an exploit to get root on your
> monitoring machine, get shell as nagios on the monitored machine and
> then use the same exploit to get root there, so don't use keys with
> nagios." 
Not at all. Now you're being downright silly. What I meant was that IF 
your nagios server gets compromised, the problem _WILL_ spread to every 
machine you're monitoring with check_by_ssh (unless the attacker is a 
real git, but that's not something to count on).

>  My attitude is "fix exploits as soon as you learn of them."
My attitude is "audit the source-code and fix bugs before you implement 
the program". There's a world of difference.

> 
> There will always be local privilege escalation problems. 
Not necessarily (unless physical access applies).

> All you can
> do is fix them as soon as possible.  The point is that if you have a
> local exploit that you leave unfixed then you have far more to worry
> about than a nagios exploit.
> 
Indeed, but that doesn't excuse making it easier for an attacker.

> NONE of our machines that perform network services
> have local users who do not already know the root password because the
> only users on those machines are those tasked with administering them.
Great. I guess that rules out remote exploits to get a low privileged 
shell which can later be escalated to a root shell, so you *MUST* be 
safe using this "security" model.

> Those people may choose bad passwords 
Not necessarily. Ever heard of password policy enforcement? Try Owl and 
you'll get a sense of it.
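
For reference, on a PAM system this kind of policy is usually a one-line 
change; Owl ships pam_passwdqc for exactly this purpose. The stack name 
and thresholds below are illustrative, not a recommendation:

```
# /etc/pam.d/system-auth (fragment) -- thresholds are only an example
# Refuse weak passwords outright instead of merely warning about them.
password  requisite  pam_passwdqc.so min=disabled,24,12,8,7 max=40 passphrase=3 enforce=everyone
```

With enforce=everyone even root can't set a password that fails the 
policy, which is what makes brute-forcing the accounts computationally 
uninteresting.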

> but we make damned sure root has
> a strong password and administrators can only get onto the machine locally
> or by ssh 
Or by exploiting a remote hole in apache or ssh (remember 'The Matrix 
Bug' ?)

> So if an attacker gets root on our monitoring machine it has to
> be through a remote exploit 
... to get ANY shell (first)

> and therefore all our monitored machines are
> likely to have the same vulnerability so the nagios exploit is not needed.
So that's an excuse to lead him by the nose to the vulnerable servers? 
You DO have a funny way of thinking. And besides, do you honestly run 
the same type of services on your backup server as you do on your 
cvs/ftp/web/dns/radius/ldap/samba/whatever server? I think not.

> There have to be compromises between security and utility.  I can make
> any machine 100% safe from network attacks by unplugging it from the
> network. 
> But I'd be interested in seeing your kernel patch to allow
> the check_raid plugin function without sudo.
I can imagine this script / program working in two ways, neither of 
which would require sudo:
1. The plugin reads /proc/mdstat, which is allowed by default and 
therefore not a real problem. There's a kernel patch to restrict access 
to /proc, but that's trivially configured so it doesn't get in the way.
2. The plugin reads a random piece of data from each of the disks in the 
array (assuming mirroring mode here). This only requires read access to 
the devices, and the offsets can be hardcoded and the plugin made setgid 
instead of using sudo. The plugin should of course only be writable by 
root, so the attacker would have to overflow it to overwrite the 
offset address (hardcoded, remember?), which sits in the TEXT segment of 
the executable in memory, so it's not even possible (segmentation 
violation when writing to read-only memory).
Even supposing the attacker got past all that, he'd still only have 
read access to the physical devices, which means he'd be playing 
'guess-and-win' at grabbing the password file (or files, if you're using 
TCB), and then running John the Ripper (or something) on it to extract 
an actual password.

> 
> The only way somebody can do that to us is if they use a remote exploit to
> get into the monitoring machine.  If they can do that then they can use
> the SAME remote exploit to get into the monitored machines. 
Discussed above. Iteration seems to have become an issue here.

> If they can
> somehow get shell as nagios on the monitoring machine but cannot use that
> with a local exploit to get root then they can't do much on the monitored
> machines either. 
Except of course browse the remote machines using the authorized_keys 
method described earlier.
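
That exposure can at least be narrowed: the authorized_keys entry on 
each monitored host can pin the nagios key to a single forced command. 
The plugin path and thresholds below are made up for illustration, but 
the options themselves (command=, no-pty and friends) are standard 
OpenSSH:

```
command="/usr/local/nagios/libexec/check_load -w 5,4,3 -c 10,8,6",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA... nagios@monitor
```

With a forced command, whatever check_by_ssh asks to run is ignored and 
the pinned command runs instead, so an attacker holding the key gets 
load averages and nothing else. The cost is one key per check, or a 
wrapper script that validates $SSH_ORIGINAL_COMMAND against a whitelist.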
> 
>> With shadow passwords there are a bunch of setsuid programs on the 
>> machine, some of which may allow local privilege escalation.
> 
> There are always local exploits.  Which is one reason why none of our
> network machines have ordinary users who do not also know the root
> password.  We don't mix the concepts of "server" and "user machine" and
> I wouldn't trust anyone who did.
The setuid binaries needed for the shadow password scheme are why you 
should change to TCB. Then you'd need an overflow to get at other 
users' password files, and you still wouldn't have a root account. 
Combined with the password policy enforcement, attacking a host with 
those rules in place isn't even computationally feasible to attempt.

>> Also, any admin on customer-network1 could submit passive check 
>> results for customer-network2.
> 
> I already mentioned that.  First, of course, they have to know that the
> other customer exists and what we've called their servers in Nagios.
> 
So this would be the first line of exploitation. If the nsca daemon has 
any bugs, then anybody hacking any of your customers would have 
something to work with. I'm not saying they'll have any luck, just that 
they know where to start (and that I for one would try for the fun of it).

> OK, so somebody at customer1 gets the root password for their local
> nagios machine sending passive service checks to us.  That means they've
> hijacked a machine we supply and control and can do more damage than
> submitting fake checks.  Then they somehow guess what other customers
> we monitor and what we've called their machines. 
Not necessarily. Generating looooong strings containing arbitrary 
garbage isn't really all that hard, but I guess you know your way around 
the code in libmcrypt and nsca, so you're already certain there aren't 
any problems there.

> If they do it from a machine that's
> traceable, the Computer Misuse Act means up to five years in prison. 
Give me a break. Anybody with a minimum of programming skill can write a 
program to relay TCP traffic through more hops than you have hairs 
on your head. I've written one myself and it took me about 45 minutes.
Laws are good sometimes, but only within the jurisdiction of whatever 
bureau you have enforcing them. Given that traceability is an issue, I 
could sit next to you and have it appear as though I'm connecting from 
Taiwan.
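
In case that sounds like bravado, here is roughly what such a relay 
boils down to, sketched in Python (not the original program; the names 
and structure are mine). Each hop just splices two TCP connections 
together, so the final server only ever sees the last hop's address:

```python
import socket
import threading

def pump(src, dst):
    """Copy bytes one way until EOF, then pass the EOF along."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)  # tell the far side we're done writing
    except OSError:
        pass  # peer already gone

def relay_one(listen_sock, target_addr):
    """Accept one connection and splice it to target_addr.

    A real relay would loop over accept() and daemonize; chaining
    several of these across hosts is what hides the true origin.
    """
    client, _ = listen_sock.accept()
    upstream = socket.create_connection(target_addr)
    threads = [threading.Thread(target=pump, args=(client, upstream)),
               threading.Thread(target=pump, args=(upstream, client))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    client.close()
    upstream.close()
```

Point one at the next hop, point that hop at the next, and so on; the 
45-minute estimate is generous.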

>> Unless we start thinking about buffer overflows, integer overflows, 
> 
> I suppose you've gone through the source of the entire kernel and every
> utility and application on your machines. 
Actually, I'm an Owl developer which means that's exactly what I do.

> I expect you've paid particular
> attention to apache, bind, postfix/sendmail/qmail, sshd, etc. 
Naturally. Postfix and sshd have been patched rather extensively, and 
the patches have been submitted to the main branch. Apache is not quite 
done yet, so at the moment I'm running a not-fully-audited version. We 
have, however, patched glibc to make exploitation harder.

> More
> realistically,
...

> such bugs will always be with us and all we can do is
> hope that if somebody finds a remote exploit they use it against somebody
> else first and a patch appears that we can apply before they get around to
> using it against us.
> It isn't feasible to check the sources of everything
> that is installed on a server. 
No, but it's every admin's responsibility to check those that are setuid 
and those that do networking in an 'out-of-jail' fashion. Also, 
naturally, the libraries to which any such program might be linked.

-- 
Sourcerer / Andreas Ericsson
OP5 AB
+46 (0)733 709032
andreas.ericsson at op5.se


_______________________________________________
Nagios-users mailing list
Nagios-users at lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/nagios-users