Value mapping support has been added to map labels so that raw values can be represented in a human-readable way.
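Conceptually, a value map translates raw collected values into labels for display. A minimal sketch, with an illustrative mapping (the dictionary and function names below are not Zabbix's API):

```python
# Illustrative value map: raw numeric state -> human-readable label.
SERVICE_STATE = {0: "Down", 1: "Up"}

def map_value(raw, value_map):
    """Return the mapped label for a raw value, falling back to the raw value.

    Zabbix-style display keeps the raw value in parentheses next to the label.
    """
    key = int(raw)
    if key in value_map:
        return f"{value_map[key]} ({raw})"
    return str(raw)
```

A value of `"1"` would thus be shown as `Up (1)`, while an unmapped value is shown as-is.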
The performance of the dashboard System status widget has been improved by 3-7% on average for both memory usage and execution time.
The time required to process 100 000 unsorted values has been reduced from 133 seconds to 2.5 seconds. This low-level improvement should help with frontend performance on large installations.
Zabbix 2.0.5 improved performance by adding a 2 MB prefetch. In some cases this could actually degrade performance, so Zabbix 2.0.7 changes the prefetch to be row-based, which improves performance in those edge cases.
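The difference can be illustrated with a toy reader (function names and numbers are illustrative, not Zabbix's implementation): a fixed-size prefetch keeps pulling rows until a byte budget is spent, so with small rows it can drag in far more data than needed, while a row-based prefetch bounds the extra work regardless of row size.

```python
def prefetch_fixed(rows, budget_bytes):
    """Fixed-size prefetch: keep reading rows until the byte budget is spent."""
    taken, used = [], 0
    for row in rows:
        if used >= budget_bytes:
            break
        taken.append(row)
        used += len(row)
    return taken

def prefetch_rows(rows, max_rows):
    """Row-based prefetch: read at most max_rows rows, regardless of size."""
    return rows[:max_rows]
```

With 10-byte rows, a 2048-byte budget prefetches 205 rows, while a row-based limit of 100 prefetches exactly 100.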
Previously, Zabbix used a configuration cache mutex and a string pool mutex. Starting with Zabbix 2.0.7, only one mutex is used. This change reduces the time needed for the initial update of the configuration cache by 20% on average.
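A sketch of the idea, with hypothetical structure names (Zabbix's actual cache is written in C): instead of acquiring two locks for every update that touches both the cache and the string pool it references, a single lock guards both, so each update pays for one lock acquisition instead of two.

```python
import threading

class ConfigCache:
    """Toy configuration cache whose items reference an interned string pool.

    A single lock guards both structures, as in Zabbix 2.0.7; earlier
    versions used a separate mutex for each. All names are illustrative.
    """
    def __init__(self):
        self._lock = threading.Lock()
        self._pool = {}    # string pool: text -> interned text
        self._items = {}   # itemid -> interned item key

    def _intern(self, text):
        # Must be called with self._lock held.
        return self._pool.setdefault(text, text)

    def update_item(self, itemid, key):
        with self._lock:   # one acquisition covers both structures
            self._items[itemid] = self._intern(key)

    def get_item(self, itemid):
        with self._lock:
            return self._items.get(itemid)
```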
Zabbix proxy performance has been improved by reducing the number of database queries on the proxy side during configuration updates. Previously, configuration updates rewrote all fields of the records being modified; now only the values that have actually changed are updated. The improvement matters most for Zabbix proxies with a large number of items (hundreds of thousands and more).
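The technique can be sketched as a diff between the old and new record: only changed columns go into the UPDATE, and an unchanged record produces no query at all. This is purely illustrative; Zabbix's proxy sync code is written in C and the table/column names below are examples.

```python
def build_update(table, record_id, old, new):
    """Build an SQL UPDATE touching only the columns that changed.

    old/new are dicts of column -> value. Returns None when nothing
    changed, so the query can be skipped entirely.
    """
    changed = {col: val for col, val in new.items() if old.get(col) != val}
    if not changed:
        return None  # no query at all: the big win during config sync
    cols = sorted(changed)
    assignments = ",".join(f"{col}=%s" for col in cols)
    sql = f"update {table} set {assignments} where itemid={record_id}"
    return sql, [changed[col] for col in cols]
```

For example, changing only `delay` on a record yields a one-column UPDATE instead of rewriting every field.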
Getting detailed statistics about each CPU on Solaris systems is more efficient in Zabbix 2.0.7. This reduces CPU utilization by the Zabbix agent, especially on systems with many cores.
The system.swap.size calculation algorithm has been changed to imitate "swap -s" on Solaris.
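On Solaris, `swap -s` prints a summary of the form `total: Nk bytes allocated + Mk reserved = Uk used, Ak available`. A sketch of deriving swap totals the way that output presents them, assuming total = used + available and free = available (the parsing and derivation here are illustrative, not Zabbix's actual code):

```python
import re

def parse_swap_s(line):
    """Derive swap figures from a Solaris `swap -s` summary line.

    Assumes total = used + available and free = available; returns
    values in bytes. Illustrative only.
    """
    m = re.match(
        r"total:\s*(\d+)k bytes allocated \+ (\d+)k reserved"
        r" = (\d+)k used, (\d+)k available", line)
    if not m:
        raise ValueError("unexpected swap -s output")
    allocated, reserved, used, available = (int(x) * 1024 for x in m.groups())
    return {"total": used + available, "free": available, "used": used}
```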
Escalation handling for trigger events has been improved. Previously, invalid records could be added to the escalation table for a trigger event, causing extra work adding and deleting records, which on large installations could affect database performance.
On 64-bit Solaris, the agent, sender and get are now compiled as 64-bit applications by default (previously they were compiled as 32-bit). Otherwise, some functionality, such as proc.mem, reports wrong values for 64-bit processes.
Previously, a configuration lock was acquired for every active item during queue calculation. Now the queue is calculated without acquiring configuration locks, improving performance on systems with a large number of items.
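One generic way to get this effect (a sketch under assumed names, not necessarily Zabbix's actual implementation) is to take a single short lock to snapshot the per-item schedule, then compute the queue of overdue items from the snapshot with no locks held:

```python
import threading, time

class QueueCalculator:
    """Sketch of queue calculation without per-item configuration locks.

    One short lock copies (itemid, nextcheck) pairs; the per-item work of
    finding overdue items then runs on the snapshot, lock-free. Names and
    structure are illustrative.
    """
    def __init__(self):
        self._lock = threading.Lock()
        self._nextcheck = {}  # itemid -> unix time of next scheduled check

    def set_nextcheck(self, itemid, when):
        with self._lock:
            self._nextcheck[itemid] = when

    def queue(self, now=None):
        now = time.time() if now is None else now
        with self._lock:                       # one lock for the snapshot...
            snapshot = list(self._nextcheck.items())
        # ...then the per-item scan needs no configuration locks at all.
        return sorted(i for i, nc in snapshot if nc <= now)
```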