  • luminescentsimian
    Junior Member
    • Jun 2012
    • 4

    #1

    A thought on extending the agent

    I've noticed a couple of patterns in the example monitoring setups for various services:
    1. There is one script to dump the stats from the daemon and translate them into an easily parseable form, from which either another script or grep & cut extracts values through a series of long, nasty UserParameter lines.
    2. A script is run via cron to extract the data from the daemon and then upload it using zabbix_sender.

    In both cases the daemons in question only provide their data in an oddly formatted ZOMGEVERYTHING dump that is expensive to generate and could impact performance if called too often.
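
    For concreteness, pattern 1 tends to end up as a wall like this in every host's agent config (hypothetical bind9 example, paths made up):
    Code:
    UserParameter=bind9.requests.query,/etc/zabbix/scripts/bind9stats.py | grep '^requests.query:' | cut -d' ' -f2
    UserParameter=bind9.requests.notify,/etc/zabbix/scripts/bind9stats.py | grep '^requests.notify:' | cut -d' ' -f2
    UserParameter=bind9.queries.a,/etc/zabbix/scripts/bind9stats.py | grep '^queries.a:' | cut -d' ' -f2
    Every item polled means another expensive full dump from the daemon.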

    I got to thinking about a better way to handle this and came up with something along the lines of memcached, or possibly even just using memcached, to store the key-values generated by these data-scraping scripts. The way I'm imagining this working right now is a new UserParameter-type option in the agent where you specify a base key (bind9[*]), a lifetime for the value, and a script that scrapes the data out of the daemon and formats it into a list of key: value pairs. The agent then takes the output, stores it (in memcached or internally), and responds to requests for that base key out of the cache until it expires, at which point it re-runs the script to get fresh data.
    For example:
    Code:
    UserParameterKey=bind9,30,bind9stats.py
    and the script emitting output like:
    Code:
    requests.query: 2097
    requests.notify: 6
    queries.a: 1352
    queries.mx: 181
    zones.discovery: JSONBLAH
    The template can then look for bind9.requests.query, bind9.requests.notify, etc.
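
    To make the cache behaviour concrete, here's a rough Python sketch of what I have in mind (the names are made up, and the real thing would live in the agent's C code):
    Code:
    #!/usr/bin/env python
    # Rough sketch of the proposed cache-and-expire behaviour.
    import time
    from os import popen
    
    cache = {}  # base key -> (expiry time, {item key: value})
    
    def lookup(basekey, lifetime, script, itemkey):
            expires, values = cache.get(basekey, (0, {}))
            if time.time() >= expires:
                    # Lifetime is up: re-run the scraper and re-parse its
                    # "key: value" lines.
                    values = {}
                    for ln in popen(script):
                            if ':' in ln:
                                    k, v = ln.split(':', 1)
                                    values[k.strip()] = v.strip()
                    cache[basekey] = (time.time() + lifetime, values)
            return values.get(itemkey)
    
    # e.g. lookup('bind9', 30, 'bind9stats.py', 'requests.query') -> '2097'
    So every item under the base key is answered from a single run of the script per lifetime window.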

    This way, instead of a wall of UserParameter= text that needs to be customized on every host, each individual daemon's stat gathering can be defined in the agent simply, and any host specifics (paths, usernames, etc.) need to be declared only once. It also provides a simple, clean, consistent mechanism for feeding data into the agent, and thus into Zabbix, without having to deal with the specifics of zabbix_sender and the Zabbix hostname individually for each monitored daemon, or deal with cron and items bouncing between supported & not supported constantly because the cron'd script isn't running as often as the zabbix_server likes.

    Comments?

    I may take a stab at implementing this, but it's been a while since I dealt with straight C.
  • luminescentsimian
    Junior Member
    • Jun 2012
    • 4

    #2
    First proof of concept: Postfix

    So, I've put together a first test setup and I'm liking the results so far. I took the sample from http://www.zabbix.com/wiki/howto/mon...itoringpostfix and tweaked it. I replaced the cron script with a python script that outputs key-value pairs:
    Code:
    #!/usr/bin/env python
    from os import popen
    
    # Summarize the mail log with pflogsumm, suppressing the detail
    # sections we don't need.
    statsscript = 'sudo /usr/sbin/pflogsumm -h 0 -u 0 --bounce_detail=0 --deferral_detail=0 --reject_detail=0 --no_no_msg_size --smtpd_warning_detail=0 /var/log/mail.log'
    
    fd = popen(statsscript, 'r')
    
    # Skip ahead to the start of the 'messages' section.
    for ln in fd:
            if ln.strip() == 'messages':
                    break
    
    # Read counters until the next section header, emitting one
    # postfix[key]:value pair per line.
    for ln in fd:
            if ln.strip() == 'Per-Hour Traffic Summary':
                    break
            if not ln.strip():
                    continue
            (val, key) = ln.strip().split(' ', 1)
            key = key.strip()
            val = val.strip()
            print "postfix[%s]:%s" % (key, val)
    Sample output:
    Code:
    postfix[received]:4
    postfix[delivered]:4
    postfix[forwarded]:0
    postfix[deferred]:0
    postfix[bounced]:0
    postfix[rejected (0%)]:0
    postfix[reject warnings]:0
    postfix[held]:0
    postfix[discarded (0%)]:0
    postfix[bytes received]:3596
    postfix[bytes delivered]:3596
    postfix[senders]:3
    postfix[sending hosts/domains]:2
    postfix[recipients]:2
    postfix[recipient hosts/domains]:2
    My zabbix_agentd.conf has
    Code:
    UserParameter=postfix[*],zbxcache /etc/zabbix/scripts/postfix 290 'postfix[$1]'
    zbxcache is a C wrapper around memcached that I threw together as a stepping stone towards implementing this functionality directly in the zabbix agent.
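
    The wrapper's logic is small; here's a Python sketch of roughly what it does, assuming the python-memcache client, a memcached on localhost, and the argument order from the config above (script, lifetime, key). One wrinkle: memcached keys can't contain whitespace, so keys like postfix[bytes received] have to be escaped:
    Code:
    #!/usr/bin/env python
    # Hypothetical sketch of the zbxcache wrapper: zbxcache SCRIPT LIFETIME KEY
    import sys
    from os import popen
    import memcache  # python-memcache client
    
    def mckey(k):
            # memcached keys may not contain whitespace, so escape it
            return k.strip().replace(' ', '_')
    
    script, lifetime, wanted = sys.argv[1], int(sys.argv[2]), sys.argv[3]
    mc = memcache.Client(['127.0.0.1:11211'])
    
    val = mc.get(mckey(wanted))
    if val is None:
            # Cache miss: run the scraper once and store everything it emits
            # with the requested lifetime.
            for ln in popen(script):
                    if ':' not in ln:
                            continue
                    k, v = ln.split(':', 1)
                    mc.set(mckey(k), v.strip(), time=lifetime)
            val = mc.get(mckey(wanted))
    
    if val is not None:
            print val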

    • luminescentsimian
      Junior Member
      • Jun 2012
      • 4

      #3
      Linux MD RAID devices, with discovery

      For the agent config:
      Code:
      UserParameter=md[*],/usr/local/bin/zbxcache /etc/zabbix/zbxmd.py 60 'md[$1,$2]'
      UserParameter=md.discovery,/usr/local/bin/zbxcache /etc/zabbix/zbxmd.py 60 md.discovery
      Here's the python script to gather the stats and generate the discovery info. It needs python 2.6 for the json module, and a new enough Linux kernel that exports the MD stats in sysfs:
      Code:
      #!/usr/bin/env python
      
      from glob import glob
      import os
      import json
      
      devices = []
      
      # Walk every MD block device exposed in sysfs.
      for devpath in glob('/sys/block/md*'):
              device = os.path.basename(devpath)
              # Remember the device for the discovery key.
              devices += [{'{#MDDEV}': device}]
              for prop in os.listdir(os.path.join(devpath, 'md')):
                      proppath = os.path.join(devpath, 'md', prop)
                      if not os.path.isfile(proppath):
                              continue
                      if not os.access(proppath, os.R_OK):
                              # write-only file, ignore
                              continue
                      try:
                              f = open(proppath, 'r')
                              val = f.read()
                              f.close()
                      except IOError:
                              continue
                      # val keeps its trailing newline, so the trailing comma
                      # suppresses print's own.
                      print 'md[%s,%s]: %s' % (device, prop, val),
      
      print 'md.discovery:', json.dumps({'data': devices})
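
      For a box with a single md0 the output looks something like this (values illustrative):
      Code:
      md[md0,level]: raid1
      md[md0,raid_disks]: 2
      md[md0,degraded]: 0
      md.discovery: {"data": [{"{#MDDEV}": "md0"}]}
      The md.discovery key feeds Zabbix low-level discovery, which then creates the md[$1,$2] items from the prototypes.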
      I haven't tested the prototype triggers yet.