Simpler way to monitor HAProxy metrics

  • gaidukas
    Junior Member
    • Jul 2013
    • 12

    #1

    Simpler way to monitor HAProxy metrics

I wanted to monitor the HAProxy metrics and read some examples from others using special scripts, or the UNIX socket with socat. I made some progress, but I wanted a simpler solution that would be more flexible and make it easy to add items when new services were added to HAProxy's config.

    Here is my solution:

First I needed to allow access to the stats via HTTP, so in the haproxy.cfg file I added:

listen admin
    bind *:8080
    stats enable
    stats auth admin:<MyPASSWORD>


    I use curl to access the metrics in CSV format. So on the HAProxy host I made sure curl was installed (it already was) and then added the following zabbix config file:

    /etc/zabbix/zabbix_agentd.d/userparameter_haproxy.conf

    In it I placed:

    UserParameter=haproxy.stats[*], curl -u admin:<MyPASSWORD> "http://localhost:8080/haproxy?stats;csv" 2>/dev/null | grep "^$1,$2" | cut -d, -f $3
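
    To see what the grep/cut pipeline does, here is a minimal sketch against a fabricated stats row (the values are made up for illustration; a real CSV row has far more columns):

    ```shell
    # Fabricated HAProxy CSV row: pxname,svname,qcur,qmax,scur,...
    # (the numeric values are illustrative only)
    sample='ProxyName,BACKEND,0,0,1,5,200,12345,67890'

    # Same filtering as the UserParameter: anchor on "pxname,svname",
    # then pull the requested column (here 5, i.e. scur)
    echo "$sample" | grep '^ProxyName,BACKEND' | cut -d, -f5
    # prints: 1
    ```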

    Now it's simple: all I need to do is send 3 params: the proxy name, the service name, and the column number (+1) of the metric I'm looking for.

    So from the zabbix server I can do:

    > zabbix_get -s haproxy_hostname -k 'haproxy.stats["ProxyName","BACKEND","18"]'
    < OK

    NOTE: passing column #18 is actually metric item 17 as reported in the CSV table here: http://cbonte.github.io/haproxy-dcon...n-1.4.html#9.1
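
    The off-by-one exists because the HAProxy docs number CSV fields from 0 while cut(1) counts columns from 1. A quick sketch using the first few real header field names:

    ```shell
    # First five field names from the HAProxy stats CSV header;
    # doc field N corresponds to cut column N+1.
    header='# pxname,svname,qcur,qmax,scur'

    # Doc field 4 ("scur") is cut column 5:
    echo "$header" | cut -d, -f5
    # prints: scur
    ```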

    Now it's easy to add zabbix items for all the metrics needed from HAProxy.

    I hope this helps others...

    -Glen
    Last edited by gaidukas; 01-10-2013, 02:34.
  • Pada
    Senior Member
    • Apr 2012
    • 236

    #2
    Nice!

    The only drawback to your solution is that there is no caching, so if you're going to pull 300+ data points from that CSV file - like we're doing - curl is going to request that CSV page way too many times.

    I wrote a PHP script that queries the CSV file once with cURL and then uses zabbix_sender to send ALL the data points in one shot to Zabbix, where all the items are configured as Trapper items.
    Unfortunately we're still on Zabbix 1.8, so it is quite a lot of effort to add all the items that we want to monitor!
    My PHP script takes the column name rather than just a column index, which helps prevent mistakes when adding the items in Zabbix.

    My next step would probably be to add Zabbix API calls to automatically add all the items that HAProxy exposes into Zabbix. It would be less effort for me to do this than to upgrade our modified Zabbix 1.8 to 2.0, and then add LLD (Low Level Discovery).
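
    The caching idea can be sketched in shell as well: query the CSV once, then push everything with zabbix_sender. The host names, item keys, and password below are placeholders, and the item keys would have to match your configured Trapper items:

    ```shell
    #!/bin/sh
    # One curl call caches the whole CSV...
    CSV_FILE=/tmp/haproxy_stats.csv
    curl -su admin:MyPASSWORD "http://localhost:8080/haproxy?stats;csv" > "$CSV_FILE"

    # ...then build a zabbix_sender batch file, one "<host> <key> <value>"
    # per line. Example: current sessions (scur, cut column 5) for every
    # BACKEND row, keyed by proxy name.
    awk -F, '$2 == "BACKEND" {print "haproxy_hostname haproxy.scur[" $1 "] " $5}' \
        "$CSV_FILE" > /tmp/haproxy.batch

    # A single call delivers all data points to the Trapper items.
    zabbix_sender -z zabbix_server -i /tmp/haproxy.batch
    ```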


    • gaidukas
      Junior Member
      • Jul 2013
      • 12

      #3
      I agree. It is a bit costly for pulling lots of data points.

      We only care about 30 to 50 items at the moment, so for now it's not an issue. I just wanted to get it up and working without lots of extra components or complexity.

      I will look into doing a script like the one you're talking about some time down the road.

      -Glen


      • anapsix
        Junior Member
        • Oct 2015
        • 1

        #4
        caching script and auto-discovery

        For the record, I had to make a discovery script and template for my environment, as well as a caching stats-fetching script. It uses socat to connect to a local socket, but it can be updated to use curl instead.

        A short blog post about it: http://random.io/haproxy/
        GitHub link: https://github.com/anapsix/zabbix-haproxy

        Hope it helps!

        Please feel free to contribute or fork.
