Re: Perf-Problem - Not more than 255 Children?

h.baecker at ecofis.de
Mon Jun 21 09:26:32 CEST 2004


I'm sorry, but this is not the answer: the system doesn't swap heavily,
and memory looks fine as well.

But I think there is a kernel limit of around 255 processes...

###
less /usr/src/linux/include/linux/limits.h
[...]
#define OPEN_MAX         256    /* # open files a process may have */
[...]
###
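
As a cross-check (a sketch, assuming a POSIX/Linux system; not from the
Nagios docs): the constants in limits.h are compile-time values, so it is
worth querying the limits actually in effect at runtime, e.g. with a small
C program:

###
/* check_limits.c - print the runtime limits relevant here (sketch). */
#include <stdio.h>
#include <unistd.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* Max open file descriptors per process (runtime counterpart of OPEN_MAX). */
    printf("sysconf(_SC_OPEN_MAX)  = %ld\n", sysconf(_SC_OPEN_MAX));

    /* Max processes per user; caps how many children nagios can fork. */
    printf("sysconf(_SC_CHILD_MAX) = %ld\n", sysconf(_SC_CHILD_MAX));

    /* Soft/hard fd limits, adjustable with ulimit -n / setrlimit(). */
    if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
        printf("RLIMIT_NOFILE: soft=%ld hard=%ld\n",
               (long)rl.rlim_cur, (long)rl.rlim_max);

    /* Per-user process limit (Linux); compare with `ulimit -u`. */
    if (getrlimit(RLIMIT_NPROC, &rl) == 0)
        printf("RLIMIT_NPROC:  soft=%ld hard=%ld\n",
               (long)rl.rlim_cur, (long)rl.rlim_max);

    return 0;
}
###

Compile and run it as the nagios user (cc check_limits.c -o check_limits &&
./check_limits) and compare the output with `ulimit -n` and `ulimit -u`.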

We will try to fix this in a few days...

Thanks

Marco Ramos <mramos at co.sapo.pt> 
18.06.2004 13:25

To: h.baecker at ecofis.de
Cc: nagios-users at lists.sourceforge.net
Subject: Re: [Nagios-users] Perf-Problem - Not more than 255 Children?

This should be the answer to your problem:

http://www.nagios.org/faqs/viewfaq.php?faq_id=115
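
In case that FAQ moves: the usual suspects for this symptom live in
nagios.cfg. A sketch of the relevant directives (the values below are
illustrative assumptions, not recommendations taken from the FAQ):

###
# nagios.cfg excerpt (Nagios 1.x-era option names)

# Upper bound on concurrently running service check children
# (0 means no limit imposed by Nagios itself).
max_concurrent_checks=0

# How often, in seconds, Nagios reaps finished check results;
# a slow reaper lets child processes pile up.
service_reaper_frequency=10

# Spread scheduled checks "smartly" over the check interval
# instead of firing them in bursts.
inter_check_delay_method=s
service_interleave_factor=s
###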

HTH,
mramos

On Fri, 2004-06-18 at 10:15, h.baecker at ecofis.de wrote:
> Hi List,
> 
> I have some performance problems with my Nagios. It is running on an
> IBM server with the following specs:
> 
> CPU Info:
> 4 x Intel(R) XEON(TM) CPU 2.00GHz
> 
> Mem Info:
>          total:       used:      free:  shared:  buffers:    cached:
> Mem:  2649300992  2368167936  281133056        0 672956416  949780480
> Swap: 1081470976    48660480 1032810496
> 
> I think it is not a small system.
> 
> All told, we've got 613 hosts and 2600 services.
> 
> I would say that 99% of the service checks have a check_interval of
> 600 seconds.
> 
> The process info says that only 75% of all checks were completed
> within a 5-minute interval.
> 
> While examining Nagios and the system, I found that at most 255
> processes owned by nagios (service checks etc.) are running. Is there
> a limit on the maximum number of processes?
> Or:
> How can I optimize Nagios so that all services are checked within 5
> minutes? Any ideas?
> 
> Thanks in advance for any answers.
> 
> Hendrik


