Perfdata from distributed nodes?

Frost, Mark {PBG} mark.frost1 at pepsi.com
Wed Feb 13 16:22:13 CET 2008


 

>-----Original Message-----
>From: nagios-users-bounces at lists.sourceforge.net 
>[mailto:nagios-users-bounces at lists.sourceforge.net] On Behalf 
>Of Marc Powell
>Sent: Tuesday, February 12, 2008 11:05 PM
>To: Nagios Users Mailinglist
>Subject: Re: [Nagios-users] Perfdata from distributed nodes?
>
>
>On Feb 12, 2008, at 9:53 PM, Frost, Mark {PBG} wrote:
>
>>
>> I'm moving our current Nagios 2.10 configuration to a Nagios 3.0rc2
>> distributed configuration.  One component that isn't working 
>for me is
>> nagiosgraph.  After some investigation, it appears that the 
>problem is
>> that while I'm sending the host/service check results to the central
>> server, none of the perfdata is making it there.  Some of 
>our checks  
>> are
>> processed for nagiosgraph via their host/service output and some by
>> their performance data.  This is all working on our 2.10 
>configuration
>> because none of the perfdata has to travel anywhere (i.e. it's all on
>> one server).
>>
>> So I thought I'd be clever and modify the send-service-perfdata and
>> send-host-perfdata to include $SERVICEPERFDATA$.  That is:
>>
>> define command {
>>        command_name    send_service_check
>>        command_line    $USER5$/send_service_check.pl $HOSTNAME$
>> '$SERVICEDESC$' $SERVICESTATEID$ '$SERVICEOUTPUT$' '$SERVICEPERFDATA'
>> }
>
>I'm unable to test this but the correct format for nagios to 
>recognize  
>performance data is 'output | perfdata'. Does using the following  
>command_line work as expected --
>
>command_line    $USER5$/send_service_check.pl $HOSTNAME$
>'$SERVICEDESC$' $SERVICESTATEID$ '$SERVICEOUTPUT$ | $SERVICEPERFDATA$'
>
>
>Note that you were also missing the final $ in 
>$SERVICEPERFDATA above.  
>It might have been an e-mail typo only though.
>
>--
>Marc

Marc,

Actually, yes, that was an e-mail typo.  I had shut all this down by the
time I sent the e-mail, so I was reproducing the configuration from
memory.  I've turned everything back on now so I can demonstrate it.

My "send_*_check.pl" scripts really just take the fields and stuff them
in a file for another program I wrote to process.  Per the previous
thread I'd posted, letting the process-*-perfdata Nagios command/script
do the actual send_nsca work was sending my latencies sky-high.  I had
understood that the idea was to make Nagios think the work had completed
as quickly as possible, to keep latencies low.  I'd seen the two other
solutions, both of which involved a FIFO, but I wasn't as comfortable
with that approach in case something went wrong.  My solution, in which
my checks write to files that are then picked up and sent every 10
seconds by a second, independent daemon process, has resulted in very
low latencies and has worked very well.

My send_*_check.pl scripts write each of these fields to a file,
separated by tabs.  The daemon program then grabs those lines and pumps
them through send_nsca to the central server.  Since send_nsca also uses
tabs as its default field separator, I don't have to transform the lines
at all, and it works nicely.
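The daemon side can be sketched in shell.  To be clear, this is my
reconstruction, not the actual code: the spool path, the "central"
server name, the send_nsca config path, and the flush_spool helper are
all illustrative assumptions, and the real scripts are Perl and weren't
posted.

```shell
#!/bin/sh
# Sketch of the independent sender daemon (assumed names/paths; the
# poster's real implementation is Perl and was not shown).
: "${SPOOL:=/var/spool/nagios-nsca}"
: "${NSCA_CMD:=send_nsca -H central -c /etc/send_nsca.cfg}"

flush_spool() {
    for f in "$SPOOL"/*.chk; do
        [ -e "$f" ] || return 0          # nothing queued yet
        # Lines are already tab-separated in send_nsca's expected
        # order (host<TAB>service<TAB>return_code<TAB>plugin_output),
        # so they can be piped straight through without reformatting.
        $NSCA_CMD < "$f" && rm -f "$f"
    done
}

# Daemon loop, as described in the post: flush every 10 seconds.
# while :; do flush_spool; sleep 10; done
```

Because the spool lines already match send_nsca's input format, the
daemon never has to parse them; it only moves them.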

But your point is well-taken.  While the pipe symbol isn't needed for my
own transport, Nagios does ultimately need it to recognize that part of
the line as perfdata.  I added the pipe symbol as you suggested in my
send_*_check.pl scripts (essentially using it to combine the service
output and service perfdata when writing the fields to my files).
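In shell, that fix amounts to joining the two fields with " | " before
writing the spool line.  Again a hedged sketch: the argument order, the
spool path, and the function name are my assumptions, standing in for
the unposted Perl.

```shell
#!/bin/sh
# Sketch of the writer side: combine $SERVICEOUTPUT$ and
# $SERVICEPERFDATA$ with a pipe, then append one tab-separated line
# for the sender daemon to pick up.  Path and names are illustrative.
# Usage: send_service_check HOST 'SERVICE DESC' STATEID 'OUTPUT' 'PERFDATA'
: "${SPOOL:=/var/spool/nagios-nsca}"

send_service_check() {
    host=$1 svc=$2 state=$3 output=$4 perfdata=$5
    # send_nsca wants: host<TAB>service<TAB>return_code<TAB>plugin_output;
    # the "output | perfdata" join lets the central Nagios re-split it.
    printf '%s\t%s\t%s\t%s | %s\n' \
        "$host" "$svc" "$state" "$output" "$perfdata" >> "$SPOOL/checks.chk"
}
```

The corresponding Nagios command definition would then pass
'$SERVICEOUTPUT$' and '$SERVICEPERFDATA$' as the last two arguments, as
in Marc's corrected command_line.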

That seemed to do the trick.  I knew I was missing something :-)

Thanks.

Mark

_______________________________________________
Nagios-users mailing list
Nagios-users at lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/nagios-users
::: Please include Nagios version, plugin version (-v) and OS when reporting any issue. 
::: Messages without supporting info will risk being sent to /dev/null