Fastest metrics collector

  • drTr0jan
    Junior Member
    • Apr 2014
    • 6

    #1

    Fastest metrics collector

    Which kind of collector is the fastest?
    I have a service that returns a list of many metrics (a few hundred) per query, in JSON format. As far as I understand it, I should write a wrapper that converts the list into a store, plus a responder that serves key->value metrics.
    And which item type (collector) should I use?
    • Simple checks with a Zabbix server loadable module - AFAIK the fastest method;
    • ODBC monitoring against the wrapper's store;
    • Trapper items, pushed to Zabbix by the wrapper;
    • External checks - AFAIK the slowest method.

    What about ODBC and trapper?
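A minimal sketch of such a wrapper in Python (the payload shape - a JSON list of name/value objects - is an assumption; adapt it to the service's real format):

```python
import json

def to_key_value(payload: str) -> dict:
    """Convert the service's JSON metric list into a key->value store."""
    # Hypothetical payload shape: a list of {"name": ..., "value": ...} objects.
    metrics = json.loads(payload)
    return {m["name"]: m["value"] for m in metrics}

sample = '[{"name": "cpu.load", "value": 0.42}, {"name": "mem.free", "value": 1024}]'
store = to_key_value(sample)
# The "responder" part is then just a lookup of one metric by key:
print(store["cpu.load"])
```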
  • kloczek
    Senior Member
    • Jun 2006
    • 1771

    #2
    Speed does not depend on Zabbix or on the monitored application; it is only a consequence of how frequently you sample the data.
    Your question is probably really about latency(?)

    Latency does not need to be the lowest possible. It only needs to be "good enough".
    http://uk.linkedin.com/pub/tomasz-k%...zko/6/940/430/
    https://kloczek.wordpress.com/
    zapish - Zabbix API SHell binding https://github.com/kloczek/zapish
    My zabbix templates https://github.com/kloczek/zabbix-templates


    • drTr0jan
      Junior Member
      • Apr 2014
      • 6

      #3
      By "fastest" I didn't mean speed or latency; I meant efficiency.

      For example: a hundred items for a list of a hundred metrics, implemented as external checks, fork a hundred processes, and each process hits the server with the same query. The result is a hundred processes and a hundred identical queries. That is wrong! How do I do it right?


      • kloczek
        Senior Member
        • Jun 2006
        • 1771

        #4
        Originally posted by drTr0jan
        By "fastest" I didn't mean speed or latency; I meant efficiency.

        For example: a hundred items for a list of a hundred metrics, implemented as external checks, fork a hundred processes, and each process hits the server with the same query. The result is a hundred processes and a hundred identical queries. That is wrong! How do I do it right?
        The exact mass-sampling technique is always closely tied to the specific monitored application or object. I don't think there is a generic way of doing such things.

        I can give you an example from the Java world.
        Every JVM mbean read operation snapshots the internal structures on top of which the mbeans are organized. Sometimes those structures are per class, sometimes per thread, and sometimes per CPU. For example, the mbean counter of created classes is per CPU: on reading the mbean with that counter, all the per-CPU counters are read, aggregated, and delivered to the caller reading that mbean.
        Why is it done this way? Because a single memory address holding such a counter would cause many locking/unlocking delays when multiple threads are creating new classes.
        Every single mbean read snapshots all those internal structures. While the snapshot is taken, the JVM is locked. When the snapshot is done, the JVM continues its own work; some aggregations are then calculated from the snapshotted data, and the result is sent to the caller querying that mbean.
        If you have a lot of mbeans to read, it would be very inefficient to read them sequentially, because:
        a) each mbean's sampling time will be slightly shifted on the time scale;
        b) massive mbean reading will constantly hammer the JVM.

        What could be done to improve JVM monitoring is to read mbeans in batches, because each batch uses only one internal snapshot and all the mbean reads are aligned on the time scale.
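The batching point can be sketched with a toy model (the real JMX API offers MBeanServerConnection.getAttributes() for reading several attributes in one round trip; the class below is only a stand-in for illustration, not actual JVM behaviour):

```python
class FakeJVM:
    """Toy model: every read operation costs one 'snapshot' (a pause in the
    real JVM); a batch read amortizes that cost across all attributes."""
    def __init__(self):
        self.snapshots = 0
        self.attrs = {"HeapUsed": 100, "ThreadCount": 20, "LoadedClasses": 5000}

    def read_one(self, name):
        self.snapshots += 1            # one snapshot per attribute read
        return self.attrs[name]

    def read_batch(self, names):
        self.snapshots += 1            # one snapshot for the whole batch
        return {n: self.attrs[n] for n in names}

jvm = FakeJVM()
for n in ["HeapUsed", "ThreadCount", "LoadedClasses"]:
    jvm.read_one(n)
sequential = jvm.snapshots             # sequential reads: 3 snapshots

jvm = FakeJVM()
jvm.read_batch(["HeapUsed", "ThreadCount", "LoadedClasses"])
batched = jvm.snapshots                # batched read: 1 snapshot
print(sequential, batched)
```

The batch also samples all three values at the same instant, which is the time-scale alignment mentioned above.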

        So... is it really relevant whether such monitoring is done through a loadable module or an external script? The answer is of course "no", because in the JVM case, how you interact with the JVM to obtain the data matters far more than what kind of binary or script does it.
        In cases other than the JVM, the pattern may be completely different and impossible to reuse for monitoring other things.

        So again... to discuss this topic, it would be better to know a bit more about what exactly you are going to monitor when sampling data at massive scale.


        • nelsonab
          Senior Member
          Zabbix Certified Specialist, Zabbix Certified Professional
          • Sep 2006
          • 1233

          #5
          @kloczek, I think drTr0jan is essentially asking "If I had a massive amount of metric data, which mechanism is the most efficient/fastest/quickest way to import those items into Zabbix." I do not believe they are asking about the details under the covers.

          DrTr0jan, there are a few ways to skin this cat. I would first suggest having a look at "Dependent Items." With this you can pull the full set of data, then parse the results and store them in individual items. This will likely be the most flexible and I recommend you start here.
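For illustration only (the item keys and the JSONPath expression are invented, and this assumes a Zabbix version recent enough to have dependent items with JSONPath preprocessing): one master item fetches the whole JSON document, and each dependent item extracts its metric with a preprocessing step, so a hundred items cost a single query.

```
Master item:    metrics.raw        (HTTP agent or UserParameter; returns the full JSON)
Dependent item: metric[cpu.load]   (master item: metrics.raw)
  Preprocessing step: JSONPath
    $[?(@.name == 'cpu.load')].value.first()
```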

          Next would be to write a wrapper script and a set of UserParameters. The wrapper script pulls the full set of metric data and stores it in a file. When you call the wrapper script for a particular item, it first checks the age of the cached data; if it is too old, it refreshes it. It then searches the cached data and returns the wanted value. The UserParameter would just call this wrapper.
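A sketch of that cache-and-refresh wrapper in Python (the cache path, the 30-second max age, and `fetch_all()` standing in for the real service query are all assumptions):

```python
import json
import os
import time

CACHE = "/tmp/metrics.cache"   # hypothetical cache file
MAX_AGE = 30                   # seconds before the cache is refreshed

def fetch_all() -> str:
    # Stand-in for the real HTTP query returning the full metric set.
    return '{"cpu.load": 0.42, "mem.free": 1024}'

def get_metric(key: str) -> str:
    """Called once per item by a UserParameter; refreshes the cache only
    when it is older than MAX_AGE, so N items cost one service query."""
    try:
        fresh = time.time() - os.path.getmtime(CACHE) < MAX_AGE
    except OSError:            # cache file does not exist yet
        fresh = False
    if not fresh:
        with open(CACHE, "w") as f:
            f.write(fetch_all())
    with open(CACHE) as f:
        return str(json.load(f)[key])

print(get_metric("cpu.load"))
```

A matching agent entry might look like `UserParameter=metric[*],/usr/bin/python3 /etc/zabbix/wrapper.py $1` (path illustrative). Note the sketch has no file locking, so concurrent agent pollers could refresh the cache twice.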

          Another method would be to write a custom script, run by cron, which parses the metric data and then uses zabbix_sender to push the values to Zabbix.
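In Python, the cron script might only need to build the `host key value` lines that zabbix_sender's input file expects (host name, key naming, and payload shape below are invented), leaving the actual send to the real binary:

```python
import json

def to_sender_lines(host: str, payload: str) -> str:
    """Format parsed metrics as zabbix_sender input:
    one '<host> <key> <value>' line per metric."""
    metrics = json.loads(payload)
    return "\n".join(f"{host} metric[{k}] {v}" for k, v in metrics.items())

sample = '{"cpu.load": 0.42, "mem.free": 1024}'
print(to_sender_lines("web01", sample))
```

Cron would then run something like `zabbix_sender -z <server> -i batch.txt`; each key must exist on the host as a Zabbix trapper item.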

          Finally, you could make a small modification to the wrapper approach above and implement it as a loadable module (by far the most difficult).
          RHCE, author of zbxapi
          Ansible, the missing piece (Zabconf 2017): https://www.youtube.com/watch?v=R5T9NidjjDE
          Zabbix and SNMP on Linux (Zabconf 2015): https://www.youtube.com/watch?v=98PEHpLFVHM
