Deletion of the performance data log files.

Ben Clewett Ben at clewett.org.uk
Thu May 20 12:32:42 CEST 2004


Ton,

I like the work you have done.

I have no problem working in this way.  It does give instant data to the 
analysis program.  It is in the end a simpler solution.

I am slightly worried about the extra load on the system.  My code, for 
example, does extensive analysis of the data: later stages may make 
upwards of 100 SQL calls for each line of data.  (Basically, lots of 
pre-processing to make analysis reporting fast.)

If option #2 is used, the processing can be run nightly and therefore not 
affect daytime users of Nagios.  Another advantage is that, the data being 
a file, the processing can be completed on other machines, as long as the 
log directory is shared and there is some way of signalling the other 
machine to tell it to parse.

In both cases, this leaves the Nagios machine to just be Nagios, without 
bogging down its CPU trying to do advanced statistical analysis on the data...
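
For illustration, the kind of user-defined command option #2 could run might 
look something like the sketch below.  It moves the closed perf log into a 
spool directory shared with the analysis machine and then signals that 
machine to parse it.  The paths, the "statbox" host name and the 
parse_perfdata script are made-up examples, not anything shipped with Nagios 
or PerfParse:

#!/usr/bin/perl -w
# Sketch only: run by Nagios after it has closed the perf data log.
use strict;
use File::Copy qw(move);

my $perflog = "/usr/local/nagios/var/serviceperf.log";
my $spool   = "/shared/nagios-spool";   # directory exported to the analysis box
my $stamp   = time();

# Move the closed log out of the way; Nagios will re-open a fresh one.
move($perflog, "$spool/serviceperf.$stamp.log")
        or die "Cannot move $perflog: $!";

# Tell the analysis machine there is work to do - here simply by running the
# parser remotely over ssh.  A flag file or a job queue would do just as well.
system("ssh", "statbox", "/usr/local/bin/parse_perfdata",
       "$spool/serviceperf.$stamp.log") == 0
        or warn "Remote parse failed: $?";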

I hardly dare mention it, since two standard methods already exist, and I 
know people are patching the code to add even more.  But could both of these 
new methods be supported as well?

Or, make an adapter for *any* method:

In every case, pipe the variables to a generic daemon in a standard format.  
It would then be very simple to rewrite the daemon, or to pick a pre-written 
one from myself, from Ton, or one shipped with the Nagios package...
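
To make the idea concrete, such a generic daemon might be little more than 
the sketch below: every method writes one tab-separated record (time, host, 
service, perf data - the same fields Ton's daemon splits on) to a single 
named pipe, and the daemon hands each record to whatever handler the site 
plugs in.  The FIFO path and the trivial handler are only examples of mine:

#!/usr/bin/perl -w
# Sketch of the generic adapter daemon.  Whatever perf data method is in use,
# it writes one tab-separated line per check result to this FIFO, and the
# site-specific work happens in handle_record().
use strict;

my $fifo = "/usr/local/nagios/var/perfdata.fifo";   # created beforehand with mkfifo

# Replace this with a database insert, a PerfParse call, an RRD update, ...
sub handle_record {
        my ($time, $host, $service, $perfinfo) = @_;
        print "[$time] $host/$service: $perfinfo\n";
}

# Re-open the FIFO whenever the writer closes it, so the daemon keeps running.
while (1) {
        open(my $pipe, "<", $fifo) or die "Cannot open $fifo: $!";
        while (my $line = <$pipe>) {
                chomp $line;
                my ($time, $host, $service, $perfinfo) = split /\t/, $line, 4;
                next unless defined $perfinfo and length $perfinfo;
                handle_record($time, $host, $service, $perfinfo);
        }
        close $pipe;
}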

Ben


Voon, Ton wrote:

> In my installation, I use option #1. This works very well. The daemon just
> reads the pipe and then enters the data into a database - a patch is
> required to open the perf file with "w", not "a":
> 
> *** xpdfile.c   Sun Jun 30 20:58:35 2002
> --- xpdfile.c.ton       Fri Mar  7 14:25:55 2003
> ***************
> *** 92,98 ****
>         /* open the service performance data file for writing */
>         if(xpdfile_service_perfdata_file!=NULL){
> 
> !               xpdfile_service_perfdata_fp=fopen(xpdfile_service_perfdata_file,"a");
> 
>                 if(xpdfile_service_perfdata_fp==NULL){
>                         snprintf(buffer,sizeof(buffer),"Warning: File '%s' could not be opened for writing - service performance data will not be processed!\n",xpdfile_service_perfdata_file);
> --- 92,98 ----
>         /* open the service performance data file for writing */
>         if(xpdfile_service_perfdata_file!=NULL){
> 
> !               xpdfile_service_perfdata_fp=fopen(xpdfile_service_perfdata_file,"w");
> 
>                 if(xpdfile_service_perfdata_fp==NULL){
>                         snprintf(buffer,sizeof(buffer),"Warning: File '%s' could not be opened for writing - service performance data will not be processed!\n",xpdfile_service_perfdata_file);
> 
> The daemon basically does this:
> 
> # Daemonize
> my $pid = fork;
> exit if $pid;
> die "Couldn't fork: $!" unless defined $pid;
> use POSIX;
> POSIX::setsid() or die "Cannot daemonize";
> 
> use lib "/opt/perl/lib/perl5/site_perl","/opt/perl/lib/site_perl";
> use DBI();
> 
> print "Started insert_perfdatad daemon($$)",$/;
> 
> my $dbh = DBI->connect("DBI:mysql:database=$nagios;host=localhost",
>                 "nagios", "burn0ut",
>                 {'RaiseError' => 1, AutoCommit => 1 }
>                 );
> 
> open PIPE, "< /usr/local/$nagios/var/serviceperf.log" or die "Cannot open serviceperf.log";
> 
> my ($sth, $time, $host, $service, $perfinfo, $hsid);
> while (<PIPE>) {
>         chop;
>         ($time, $host, $service, $perfinfo) = split ("\t", $_);
>         next unless ($perfinfo);
>         $perfinfo =~ s/(\w)= *([\d.])/$1=$2/g;          # Strip leading spaces
>         
>         $sth = $dbh->prepare("SELECT hsid from hostservice where host='$host' and service='$service'");
>         $sth->execute();
>         $hsid = $sth->fetchrow;
>         $sth->finish();
>         
>         if ($hsid) {
>                 $hsid = "'$hsid'";
>         } else {
>                 $dbh->do("insert into hostservice (host, service) values ('$host', '$service')");
>                 $hsid = "LAST_INSERT_ID()";
>         }
> 
>         $dbh->do("insert into perfdata (datetime, hsid, perfinfo) values (from_unixtime($time), $hsid, '$perfinfo')");
> }
> close PIPE;
> 
> So it's nice because it just inserts the perf data as it is written to the
> pipe. Nagios depends on this daemon, so it is started as part of the Nagios
> startup.
> 
> I think the main advantage I've had with the perf data is that it is in a
> database (so I've written separate CGI routines to display it), so support
> for Real-Time Data Output to a database is crucial.
> 
> Ton
> 
> -----Original Message-----
> From: Ben Clewett [mailto:Ben at clewett.org.uk] 
> Sent: 20 May 2004 09:00
> To: Ethan Galstad
> Cc: nagios-devel at lists.sourceforge.net
> Subject: Re: [Nagios-devel] Deletion of the performance data log files.
> 
> Ethan Galstad wrote:
> 
>>Two other options exist:
>>
>>1.  Point the perf data logs to a named pipe, which is then read (and
>>flushed) by an external daemon that processes the data
>>
>>2.  Modify Nagios so that at predefined intervals:
>>	a. The perf data log is closed
>>	b. A user-defined command is run (to rotate the log, process perf 
>>data, etc.)
>>	c. The perf data log is re-opened
>>
>>
>>The first option is the cleanest, but it requires another daemon.  
>>The second option is probably a bit more flexible.  What are people's 
>>thoughts on this?  Should I add option #2 into Nagios 2.0?
> 
> 
>  From the perspective of the parsing program I maintain (PerfParse), either
> or both options could be used.
> 
> But I would prefer the second myself.  This would give flexibility, as some
> users could parse nightly, others every minute, depending on the ability of
> the machine and the reasons for the parse.  It would also introduce fewer
> dependencies - e.g. the daemon in method #1 might crash, affecting Nagios.
> 
> If the second is used, may I suggest that whatever command is specified
> include parameters giving the paths of the log files, e.g.:
> 
> "perfparse %service-perf-file-name% %host-perf-file-name%"
> 
> Just some ideas. :)
> 
> Regards, Ben.
> 
> 
> 


