
8 What's new in Zabbix 2.2.3

8.1 SNMP bulk requests

SNMP monitoring performance is significantly improved by introducing bulk requests with at most 128 items per request. The load on Zabbix server and on monitored SNMP devices should be greatly reduced:

  * regular SNMP items benefit from GetRequest-PDU with a large number of variable bindings;
  * SNMP low-level discovery rules for SNMPv2 and SNMPv3 benefit from GetBulkRequest-PDU with a large value of the "max-repetitions" field;
  * SNMP items with dynamic indexes benefit from both of these improvements: one for index verification and another for building the cache.
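Packing many variable bindings into one request cuts the number of request PDUs roughly by the batch size. A minimal sketch of that arithmetic, assuming the 128-item cap quoted above (the helper name is hypothetical, not part of Zabbix):

```python
import math

# Maximum number of variable bindings packed into one bulk request
# (the 128-item cap described above).
MAX_BULK_ITEMS = 128

def request_count(item_count: int, batch_size: int = MAX_BULK_ITEMS) -> int:
    """Number of SNMP request PDUs needed to poll item_count items
    when up to batch_size variable bindings are sent per request."""
    return math.ceil(item_count / batch_size)

# Without bulking each item needs its own GetRequest-PDU;
# with 128-item batches, 1000 items fit into ceil(1000 / 128) = 8 PDUs.
print(request_count(1000, 1))   # 1000 requests, one per item
print(request_count(1000))      # 8 requests with 128-item batches
```

The same reduction applies to GetBulkRequest-PDU in low-level discovery, where "max-repetitions" plays the role of the batch size.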

See more information about SNMP bulk processing.

8.2 Frontend improvements

8.2.1 Updated translations

  • Brazilian Portuguese
  • Italian
  • Japanese
  • Slovak
  • Turkish

8.3 Daemon improvements

  • Graph processing performance during low-level discovery has been significantly improved. Testing with 2048 graphs showed 600 times fewer SQL requests during the initial discovery. Further runs without changes produced 2500 times fewer SQL requests, and runs requiring a graph name change produced 1500 times fewer. The total size of SQL statements was 3.7 times smaller for the initial discovery, 3000 times smaller for further runs without changes and 1500 times smaller when a graph name change was required.
  • Graphs created by low-level discovery are no longer deleted when the relevant items are not discovered anymore; they continue to work until those items get deleted.
  • Batch processing of IT services has been added. It resolves possible deadlocks and improves performance when processing large IT service trees. Testing with 800 IT services in a tree 4 levels deep showed a 300% performance improvement.
  • Significantly improved log file monitoring (log[] and logrt[] items):
    • more efficient log file reading and matching of records against regular expressions.
    • more efficient selection of log files when checking logrt[] items.
    • for log file records longer than 256 kB, only the first 256 kB are matched against the regular expression and the rest of the record is ignored. However, if Zabbix agent is stopped while it is processing a long record, the agent's internal state is lost and the long record may be analyzed again, and differently, after the agent is restarted.
    • for log[] items: if there is a problem with the log file (e.g. it does not exist or is not readable), the log[] item now becomes NOTSUPPORTED. Before this change (in 2.2.2), it did not go into the NOTSUPPORTED state because of a bug in the agent.
    • for logrt[] items:
      * On UNIX platforms a ''logrt[]'' item becomes NOTSUPPORTED if the directory where the log files are expected to be found does not exist.
      * Unfortunately, on Microsoft Windows the item does not become NOTSUPPORTED if the directory does not exist (for example, if the directory is misspelled in the item key). Currently this is a limitation of the agent.
      * An absence of log files for a ''logrt[]'' item does not make it NOTSUPPORTED.
      * Errors in reading log files for a ''logrt[]'' item are logged as warnings in the Zabbix agent log file but do not make the item NOTSUPPORTED.
    • The Zabbix agent log file can be helpful in finding out why a ''log[]'' or ''logrt[]'' item became NOTSUPPORTED. Zabbix can monitor its own agent log file, except when DebugLevel=4.
    • Please note that even though the performance of ''log[]'' and ''logrt[]'' item checks has been improved, the limits on the maximum number of log file records analyzed and the number of matching records sent to the server in one check are unchanged. For example, if a ''log[]'' or ''logrt[]'' item has an //Update interval// of 1 second, by default the agent will not analyze more than 400 log file records and will not send more than 100 matching records to Zabbix server in one check. By increasing the **MaxLinesPerSecond** parameter in the agent configuration file or setting the **maxlines** parameter in the item key, the limits can be raised to up to 4000 analyzed log file records and up to 1000 matching records sent to Zabbix server in one check. If the //Update interval// is set to 2 seconds, the limits for one check are twice those for an //Update interval// of 1 second.
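The per-check limits described above scale linearly with the update interval. A small sketch of that arithmetic (the factor of 4 between analyzed and sent records is inferred from the default figures quoted above, 400 analyzed vs. 100 sent; the function name is hypothetical):

```python
# Per-check log monitoring limits, derived from the numbers in the text:
# the agent sends at most max_lines_per_second * update_interval matching
# records per check, and analyzes at most 4 times that many records.
def log_check_limits(max_lines_per_second: int, update_interval: int):
    sent = max_lines_per_second * update_interval
    analyzed = 4 * sent
    return analyzed, sent

print(log_check_limits(100, 1))    # defaults: (400, 100)
print(log_check_limits(1000, 1))   # raised limit: (4000, 1000)
print(log_check_limits(100, 2))    # 2-second interval doubles both: (800, 200)
```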
  • Startup and shutdown scripts for Java gateway no longer hide error messages on startup. They now also detect stale PID files and should work in /bin/sh.
  • Value cache reporting more free space than really available has been fixed.
  • Improved error messages for VMware items. Instead of the generic "Simple check is not supported", a failure-specific message is now reported.
  • Maximum data transfer size increased from 64MB to 128MB to stay compatible with previous versions of Zabbix. If a process with a 128MB data transfer limit sent data to a process with a 64MB limit, the receiving process would drop the data for exceeding its size limit.
  • Maximum configuration cache size increased from 2GB to 8GB.
  • On Oracle databases variable binding is now used for bulk inserts, resulting in much better performance.

8.4 Miscellaneous improvements

  • The Zabbix agent daemon manpage now describes the meaning of value types in -p and -t output.