Distributed Setup of Nagios

Kyle O'Donnell nagios at isprime.org
Thu Aug 19 16:39:30 CEST 2010


We're using the Professional or Enterprise edition of GroundWork (GW); I'm
not sure of the branding anymore...

The mechanism we use to pass event data from the Nagios pollers to the top
level was developed by us.  The methods internal to Nagios for passing
events (OCHP and OCSP, or the performance data processor) did not meet our
requirements... they added too much overhead to Nagios and limited the
number of active host/service checks a single Nagios instance could
perform.
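
For reference, this is roughly what the stock OCSP approach looks like
(just a sketch of the standard setup, not our config; the paths and the
wrapper name are made up).  Every single check result fires ocsp_command,
which forks a wrapper plus a send_nsca process, and that per-result
forking is where the overhead comes from:

  # nagios.cfg on a poller
  obsess_over_services=1
  ocsp_command=submit_check_result

  # commands.cfg -- the wrapper just pipes
  # "host<TAB>service<TAB>state<TAB>output" into send_nsca, so processes
  # get forked for every check result the poller runs
  define command {
    command_name  submit_check_result
    command_line  /usr/local/nagios/libexec/submit_check_result "$HOSTNAME$" "$SERVICEDESC$" $SERVICESTATEID$ "$SERVICEOUTPUT$"
  }

OCHP works the same way for host checks (obsess_over_hosts/ochp_command).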

The version GW and I hacked together was fairly simple.  It was also based
on some code GW used for another function within their product, but it was
easily recyclable.  It was a Perl script that ran as a daemon, read in the
Nagios status log, parsed it, and sent messages back to the top-level
Nagios via NSCA.
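
Conceptually it worked something like the sketch below.  This is not the
actual code (which hasn't been released); the paths, the central host name
and the 30 second interval are placeholders.  The point is that instead of
forking per check result, it re-reads the status file on an interval and
streams every current state through a single send_nsca:

  #!/usr/bin/perl
  # Minimal sketch: parse the Nagios status file and push current host
  # and service states to the central server as passive results via
  # send_nsca.  Paths and the host name below are examples only.
  use strict;
  use warnings;

  my $status_file = '/usr/local/nagios/var/status.dat';
  my $send_nsca   = '/usr/local/nagios/bin/send_nsca';
  my $central     = 'central.example.com';
  my $nsca_cfg    = '/usr/local/nagios/etc/send_nsca.cfg';

  while (1) {
      open my $fh, '<', $status_file or die "open $status_file: $!";
      open my $out, '|-', "$send_nsca -H $central -c $nsca_cfg"
          or die "send_nsca: $!";

      my (%block, $type);
      while (my $line = <$fh>) {
          if ($line =~ /^\s*(hoststatus|servicestatus)\s*\{/) {
              $type  = $1;
              %block = ();
          }
          elsif ($type && $line =~ /^\s*(\w+)=(.*)$/) {
              $block{$1} = $2;
          }
          elsif ($type && $line =~ /^\s*\}/) {
              # NSCA passive format: host[<TAB>svc]<TAB>state<TAB>output
              my @fields = $type eq 'servicestatus'
                  ? qw(host_name service_description current_state plugin_output)
                  : qw(host_name current_state plugin_output);
              print {$out} join("\t", @block{@fields}), "\n";
              $type = undef;
          }
      }
      close $fh;
      close $out;    # send_nsca flushes the whole batch here
      sleep 30;      # fixed interval instead of per-result forking
  }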

It was totally rewritten and a bunch of other features were added by one
of my colleagues.

I am not sure if we are allowed to open-source this code... but I would
sure like the guy who wrote it to! (I know you're watching.)

Outside of figuring out config synchronization for each Nagios poller,
which you can use Monarch (part of GW) for, this bit of code would make it
very simple to build large distributed Nagios installs.

We have made some other significant changes to GW to scale as large as we
did, but those had less to do with Nagios and more to do with GW.  We are
using a ramdisk for the Nagios log directory, which helps quite a bit with
Nagios performance.
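
If anyone wants to try that part, it is just a tmpfs mount over whatever
directory Nagios writes its status/log/checkresult files to.  The mount
point and size below are examples, not our layout, and remember the
contents disappear on reboot:

  # /etc/fstab -- example only, adjust path and size to your install
  tmpfs  /usr/local/nagios/var  tmpfs  size=512m,mode=0775  0  0

You then need to recreate the directory layout (e.g. rw/ for the command
pipe, spool/checkresults) and fix ownership before Nagios starts.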

--kyleo

On Thu, 19 Aug 2010 10:57:23 +0200, Bradley Radjoo
<Bradley.Radjoo at is.co.za> wrote:
> Cool :-)
> 
> Kyle, I assume you're not using the Community Edition of Groundwork
> Monitor.
> 
> Because I was hoping to find some help on this mailing list about a
> distributed setup using a single instance of Groundwork Monitor
> Community Edition version 6+ with multiple Nagios 3 remote servers...
> This proves to be very hard to come by.
> 
> Any help would be really appreciated
> 
> On 18 Aug 2010, at 6:17 PM, Kyle O'Donnell wrote:
> 
>> 
>> groundwork monitor
>> 
>> 
>> On Wed, 18 Aug 2010 17:54:36 +0200, Bradley Radjoo
>> <Bradley.Radjoo at is.co.za> wrote:
>>> Wow! That is definitely impressive.
>>> 
>>> Would this be the Opsview Community Edition, Kyle?
>>> 
>>> On 18 Aug 2010, at 5:07 PM, Kyle O'Donnell wrote:
>>> 
>>>> we have ~30000 services and ~3000 hosts
>>>> 
>>>> we have 6 pollers (each has a backup) processing checks and
>>>> forwarding back to a central nagios host.
>>>> 
>>>> our busiest poller has ~1000 hosts and ~9000 services... avg service
>>>> check interval is 5 minutes, but there are a bunch at 1 and 2 minute
>>>> intervals.
>>>> 
>>>> avg service check latency is less than 1 second
>>>> 
>>>> This is ~3yr old hardware too, I suspect we could increase capacity
>>>> by 50% if we move to the new Intel Nehalems
>>>> 
>>>> we don't use active host checks
>>>> On Wed, 18 Aug 2010 15:51:55 +0100, Ton Voon <tonvoon at gmail.com> wrote:
>>>>> On 18 Aug 2010, at 15:38, Max wrote:
>>>>> 
>>>>>> On Wed, Aug 18, 2010 at 7:22 AM, Ton Voon <tonvoon at gmail.com> wrote:
>>>>>>> You may want to look at Opsview (http://opsview.com).
>>>>>>> 
>>>>>>> From a single point of configuration, it pushes out the nagios
>>>>>>> configuration to the remote slaves which are independently running
>>>>>>> their own copy of Nagios. We have users going up to 25 slaves!
>>>>>> 
>>>>>> Cool - how many active service checks / active host checks per
>>>>>> poller?
>>>>> 
>>>>> As many as a single nagios instance runs. You can scale out by
>>>>> adding more slaves. We also have a feature where you can have slave
>>>>> clusters to do workload balancing and redundancy, so you can just
>>>>> add another node if hardware is the issue.
>>>>> 
>>>>> The bottleneck would be at the central master, but that is very
>>>>> fast because of only processing passive results.
>>>>> 
>>>>> Ton