check_by_ssh question

Andreas Ericsson ae at op5.se
Sat Mar 27 13:21:42 CET 2004


Paul L. Allen wrote:
>>> Andreas Ericsson writes:
>>> I haven't seen anything that doesn't have local exploits.
>>
>> I have. It's called Owl, and you can find it at www.openwall.com/Owl
> 
> You can add TeX to that list.  The number of commonly-used server
> applications that have never had a local or remote exploit is zero.
>
Wrong. I've written plenty of programs that are unexploitable. Small 
ones, I'll grant you that, but programming discipline scales.

>> And most patches come out because a problem has been detected.
> 
> Which is why I said all you can hope is that you're not one of the
> unlucky ones that gets hit before there's a patch. 
A downright lie. I've patched ntp, postfix and vsftpd BEFORE running 
them, which means I proactively blocked one or more exploits from ever 
happening.

> You can vet the
> sources of critical stuff, but even then you're likely to miss
> something.  You can detect obvious stuff with automated tools, you
> can detect less obvious stuff with human inspection, but you can't
> detect stuff that you never imagined could be exploited that way.

This is where it becomes obvious to me that you're not a programmer (at 
least not a very good one).

>>> Your proposed nagios exploit relies upon far more serious exploits in
>>> order to be able to use it in the first place.
>>
>> Not really, no. A local user can probably find something to elevate 
>> his privileges with.
> 
> We don't have any local users on our servers who don't already know the
> root password.  I wouldn't dream of having a server that is also a
> general box for ordinary users to play on.  Even apart from the security
> issues, I don't want an ordinary user hogging the CPU or doing something
> else that degrades the service.
> 
Couple this with any remote exploit to get a shell as the 'nobody' (or 
'anybody', really) user, and voilà. All of a sudden there's a user on the 
system who doesn't know the root password. End of this part of the 
discussion, ok?

>> A couple of months ago, a remote exploit was found for apache 1.3.28, 
>> and today there was a new issue with 2.0.48 (3 different, actually), 
>> which could allow remote users to gain shell access.
> 
> There are always remote exploits. 
Not really, no. The proper way to run networking services that need 
root access to bind ports, devices, etc. is to have them behave as 
follows:
Read the configuration, bind() the port, chroot() to /var/empty (or some 
other nice place), drop privileges, then start accept()'ing on the port.
This needs some tweaking to work with apache, true, but it lessens the 
impact of remote exploits tremendously (since the attacker will find 
himself jailed in a chroot environment as a user who lacks the privileges 
to escape the chroot jail). The need for ANY applications in the chroot 
environment is absolutely zero, which means that our Mr. Evil Hacker has to:
a) Find a bug which allows remote arbitrary code execution.
b) Craft a program to exploit it.
c) Figure out how to get networking AND shell functionality into the 
space that's left on the stack.

The 'a' and 'b' things happen all the time. I've never seen anybody pull 
off 'c', and I know for a fact that it isn't possible.

Bugs in programs that behave like this may result in a DoS, but never in 
unauthorized access.

> What I meant was that IF 
>> your nagios server gets compromised, the problem _WILL_ spread to 
>> every machine you're monitoring with check_by_ssh (unless the attacker 
>> is a real git, but that's not to hope for).
> 
> You're right that the monitored servers *could* be compromised that
> way.  But with no "ordinary" local users then the monitoring server
> could only be hacked using a remote exploit which is very likely to
> be usable on all the monitored servers too.
However, the monitored servers don't contain the complete map of the 
network that the nagios server supplies. Also, the 'other' servers don't 
have the key required to do it all without hoping for exploitable 
network services to be running.

>>> Those people may choose bad passwords
>>
>> Not necessarily. Ever heard of password policy enforcing?
> 
> 
> Yeah, VMS had it back in the 80s.
> 
And the PAM modules that come with Owl do it properly as well, 
preventing both users and root from setting weak passwords.

> I remember slapper as well.  But again, if either ssh or apache has a new
> exploit and we're unlucky enough to be amongst the first hit then they
> don't need the nagios exploit.
> 
--[ snip ]--
> How can I stop him knowing what the vulnerable servers are when all
> he needs to do once he has root on the monitoring box is to scan through
> the nagios configs?  Most of the monitored servers run the same mix of
> stuff as the monitoring server so if he can get onto the monitoring
> box he can get onto most of the others the same way.
> 
This is where logic goes out the window. In less than ten lines of 
shell-script, I could automate setting up perfectly invisible backdoors 
on every last one of your monitored servers. In the 'exploit' scenario 
I'd most likely have to do 'manual' labour for each server I wanted to 
backdoor.

>> The setsuid binaries needed for the shadow password scheme is why you 
>> should change to TCB. Then you'd need an overflow to get to read other 
>> users password files but you still wouldn't have a root account. 
>> Combined with the password policy enforcement it is not 
>> computationally feasible to even try to attack a host with those rules 
>> in place.
> 
> 
> But that doesn't protect you from all the other possible local exploits.
> Far safer not to have ordinary users who don't know the root password on
> servers.  In some ways, safer still not to have any local users because
> the only reason for connecting to those boxes in the first place is to
> do something as root.
> 
And run network services (which can be exploited) as pseudo-users. Those 
pseudo-users CAN get shell accounts if the network services are exploited.
Solution: remove the setuid/setgid bits on ALL programs on the server, and 
have everybody do everything as root. Without the setuid programs, local 
non-root users can only spawn shells as themselves when exploiting local 
programs.

>> So this would be the first line of exploitation. If the nsca daemon 
>> has any bugs, then anybody hacking any of your customers would have 
>> something to work with. I'm not saying they'll have any luck, just 
>> that they know where to start
> 
> Yeah, NCSA worries me.  So does any daemon that accepts incoming
> connections.  I wish I could turn them all off, but then we wouldn't
> have a business.
> 
>> (and that I for one would try for the fun of it).
> 
> Whereas I would not.  Not only because of the legal penalties but also
> because I have a degree of respect for others.  Crackers would do it
> for the hell of it, I wouldn't expect any sysadmin with a sense of
> ethics to.  And I wouldn't trust any software with a team member who
> said he'd try hacking into somebody else's machine "for the fun of it",
> let alone security software.  I think I'll give owl a miss...
> 
What I meant was to try it against my own network. Neither unethical nor 
illegal, but equally fun.

>>> Then they somehow guess what other customers
>>> we monitor and what we've called their machines.
>>
>> Not necessarily. Generating looooong strings containing arbitrary 
>> garbage isn't really all that hard, but I guess you know your way 
>> around the code in libmcrypt and nsca, so you're already certain there 
>> aren't any problems there.
> 
> I worry about buffer overflows in everything.
> But again you're talking about needing a remote exploit first in order
> to be able to use a local exploit in order to be able to use the key
> exploit. 

This is generally the path every attacker takes to rooting a box. 
The key exploit just makes it a whole lot easier to root (and backdoor) 
more servers on the network.

> The monitoring server is slightly more vulnerable than the
> others because only it runs the NCSA daemon.  But since so few machines
> run an NCSA daemon compared to machines that run apache, bind, etc.,
> black-hats are less likely to spend time on it.

You're hoping for things that might not be true.
Remember: assumption is the mother of all fuckups.

>> Give me a break. Anybody with a minimum of programming skill can write 
>> a program to relay tcp traffic through more hops than you have
>> hairs on your head. I've written one myself and it took me about 45 
>> minutes.
> 
> 
> So have many crackers. :(  But where somebody exploits through physical
> access there may well be signs left behind, which is enough to deter
> most people from trying it.

Rooting through physical access IS impossible to safeguard against, and 
it's supposed to be, so I'm guessing that won't change.

>> Laws are good sometimes, but only if it's within the jurisdiction of 
>> whatever bureau you have enforcing it.
> 
> UK law makes it a criminal offence as long as one end (attacker or
> attacked machine) is in the UK.  Most countries have extradition laws.
> 
Conducting an international investigation is no small undertaking, and 
it will probably only happen if the crime has serious implications (my 
turn to assume). Also, it's worth noting that script-kiddies usually 
don't know anything about laws, and care about them even less.

>>> I suppose you've gone through the source of the entire kernel and every
>>> utility and application on your machines.
>>
>> Actually, I'm an Owl developer which means that's exactly what I do.
> 
> So why do exploits keep happening? 

Most exploits discovered over the past five years can be attributed to 
CBK (Crap Behind Keyboard), since many users don't know squat about how 
to set things up properly. This implies that coding standards have 
improved, and that security consciousness has increased among 
open-source hackers.

> Most open-source projects have at
> least one developer who is strict about enforcing good coding practises
> in changes.  If you know all the existing holes in everything, why
> haven't you submitted patches? 

I have.

> And why is there a large team of
> programmers vetting all open-source sources for this sort of problem?
> Did none of them know that you've already vetted EVERYTHING and they
> could have just asked you?
> 
I haven't gone through everything. Networking programs and setuid/setgid 
binaries have priority. The rest of them can't really be used to elevate 
privileges as long as the system is set up properly.

> You're not running a fully-audited version.  You never will, because
> new versions will appear faster than you can vet all changes to everything
> installed unless you're superhuman. 

This is a problem we're keenly aware of.
Owl currently uses glibc-2.1.3, which is two minor versions behind the 
current one. This is a result of the fact that auditing its code took 
well over a year to complete.
The result is that the vast majority of our patches now can't easily be 
applied to the current glibc. We are working on it, and the glibc team 
takes in every patch they consider good and standards-compliant.

> You've vetted a couple of daemons, and
> a couple of critical libraries but not everything that could potentially
> be exploited. But at least you started with the ones most likely to be
> compromised. 

See above for this one. Also, a general problem with some programs is 
their usage of the /tmp directory (symlinks there are sometimes followed 
by setuid programs without scrutiny). In Owl, this has been solved by 
adding /tmp/.private, under which each user has their own directory, 
readable only by that user. This imposes the limitation that each system 
can have a maximum of 32768 users. Should an acute need arise, this can, 
however, be worked around.

> The day you have everything audited and can keep on top of
> changes is the day you can corner the market in distros and probably
> topple Microsoft too.  Until then you have to worry about local exploits
> and remote exploits, just like the rest of us.
> 
Yes, probably so. But the downside of totally secure systems is lower 
user-friendliness. Most users want things to be easy, so they trade 
security for usability.

>> No, but it's every admins responsibility to check those that are 
>> setuid and those that do networking in an 'out-of-jail' fashion. Also, 
>> naturally, the libraries to which any program might be linked.
> 
> Then there are all the kernel exploits.  You cannot hope to check
> everything by yourself and you cannot check for exploits that rely
> upon principles you never thought of.

That's why Owl is developed by a team, and not by me alone.

> BTW, did you disassemble gcc?  There is no other way of being sure
> that you don't have a Ken Thompson back-door unless ALL owl development
> was done on machines that have never connected to the Internet before
> the standard password routines were replaced by Owl on those machines.
> Recompiling gcc from source, patched or not, does not get rid of a
> Ken Thompson back-door. 

Yes, it does. In fact, simply reordering the variable declarations is 
enough to get rid of the kt-backdoor, since it's only capable of 
applying code to copies of itself, and not to other programs (reordering 
the variable declarations generates different assembly and object code, 
which is enough for it to 'miss' on the recognition).
See http://www.jargon.net/jargonfile/b/backdoor.html for more info on 
this. I didn't find the sources for it, but the hack is long gone now.


Just to kill off a couple of the sections in this thread:
Local users without the root password MIGHT get access to the system, 
and if they manage to hack root, there isn't necessarily any way of 
telling they've been there.

The nagios server is the hacker's primary target, since it provides him 
with a map of the network and a listing of (most of) the services 
running on each host, which means he doesn't have to run portscans or 
random connection attempts that might be picked up by a NIDS. Also, 
outgoing traffic from the nagios server is less likely to be flagged as 
intrusive, since it already shoots packets high and low to every 
monitored part of the network.

Any passwordless key, with or without restrictions, can be used to 
obtain shell access as the remote key-holder's user.

In an environment where all the local users have access to the 
root-password, there's no need for setuid/setgid binaries.

There's no need for any networking daemon to run as root, since the port 
it listens on can easily be forwarded to one outside the privileged 
range with a few simple iptables rules. Coupled with chroot jails where 
possible, this can turn access-granting exploits into mere DoS attacks. 
Still annoying, but not nearly as dangerous to business.
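For illustration, such a redirect could look like the rule below. This is a sketch of the idea, not a rule from this thread; the port numbers (80 and 8080) are assumptions, and a real ruleset would be adapted to the interface and service in question:

```shell
# Clients still connect to the standard port 80; the kernel's nat table
# redirects them to port 8080, where the daemon listens without ever
# having needed root to bind() its socket. (Requires root to install,
# like any iptables rule.)
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
```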


-- 
Mvh / Best Regards
Sourcerer / Andreas Ericsson
OP5 AB
+46 (0)733 709032
andreas.ericsson at op5.se


_______________________________________________
Nagios-users mailing list
Nagios-users at lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/nagios-users
::: Please include Nagios version, plugin version (-v) and OS when reporting any issue. 
::: Messages without supporting info will risk being sent to /dev/null




