Zabbix - Timescale and data retention

  • db100
    Member
    • Feb 2023
    • 61

    #1

    Zabbix - Timescale and data retention

    I have tried to look this up in the official documentation, but there is no explicit statement about the following:

    * When using TimescaleDB with Zabbix, and with compression enabled, is history retention still applied? That is, are older points removed from compressed chunks (or perhaps entire chunks dropped)?
    * When items are deleted because of LLD rules, are both history and trends cleared from the DB? Does the same hold true for items that belong to a host that is being deleted?
    * In the meantime, TimescaleDB has added functional routines to add late-arriving data to compressed chunks, and even to bulk import using temporary tables. Is this feature planned to be integrated into Zabbix?


    Also, a side question: does the trends table use TimescaleDB's continuous aggregates feature? If not, why not?
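For concreteness, the TimescaleDB features being asked about look roughly like the following. These are illustrative statements only; the hypertable name `history`, the intervals, and the view name are assumptions, not Zabbix's actual schema or configuration:

```python
# Illustrative TimescaleDB statements (assumed table/interval names, not Zabbix's real schema).
compression_sql = "SELECT add_compression_policy('history', INTERVAL '7 days');"
retention_sql = "SELECT add_retention_policy('history', INTERVAL '90 days');"

# A continuous aggregate resembling the trends table (hypothetical sketch):
trends_cagg_sql = """
CREATE MATERIALIZED VIEW trends_cagg
WITH (timescaledb.continuous) AS
SELECT itemid,
       time_bucket(INTERVAL '1 hour', clock) AS hour,
       min(value) AS value_min,
       max(value) AS value_max,
       avg(value) AS value_avg,
       count(*) AS num
FROM history
GROUP BY itemid, hour;
"""
```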
    Last edited by db100; 13-04-2023, 19:27.
  • gofree
    Senior Member
    Zabbix Certified Specialist, Zabbix Certified Professional
    • Dec 2017
    • 400

    #2
    * The whole idea is to remove chunks (older than X days), which is faster (a chunk drop) than the housekeeper (selecting everything older than the cutoff from every table) and will not kill Zabbix performance in large environments the way the housekeeper used to. That's my understanding.
    * Eventually yes, although I'm not sure how it works with the TimescaleDB implementation (waiting for the chunk to be dropped? or is the housekeeper still doing the work? perhaps somebody knows more).
    * What would be the scenario for this? Why would you want to add data to already collected data (chunks)?
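The performance difference between dropping chunks and housekeeper-style deletes can be illustrated with a toy model, assuming one chunk per day (the real chunk interval is configurable and this is not Zabbix's actual data structure):

```python
from collections import defaultdict
from datetime import date, timedelta

# Toy model of a hypertable: one chunk per day, each chunk holding its rows.
chunks = defaultdict(list)

def insert_row(day: date, value: float) -> None:
    """Route a row into the chunk covering its timestamp."""
    chunks[day].append((day, value))

def drop_expired_chunks(cutoff: date) -> int:
    """Chunk-based retention: drop whole partitions older than the cutoff.
    The cost scales with the number of chunks, not the number of rows,
    which is the idea behind TimescaleDB's drop_chunks() vs. a
    row-by-row housekeeper DELETE."""
    expired = [day for day in chunks if day < cutoff]
    for day in expired:
        del chunks[day]  # one cheap operation per chunk, regardless of row count
    return len(expired)
```

Deleting a chunk is a constant-time metadata operation per partition, whereas the housekeeper has to find and delete each expired row individually.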

    In general I'd say trends are pretty cheap. Trends is a built-in historical data reduction mechanism which stores the minimum, maximum, average and the total number of values per hour for numeric data types.
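That hourly reduction can be sketched as follows (a minimal illustration of the min/max/avg/count rollup described above, not Zabbix's actual trends code):

```python
from collections import defaultdict
from datetime import datetime

def build_trends(history):
    """Reduce raw history rows (timestamp, value) into hourly trend rows
    holding min, max, avg and the number of values, like the trends table."""
    buckets = defaultdict(list)
    for ts, value in history:
        hour = ts.replace(minute=0, second=0, microsecond=0)
        buckets[hour].append(value)
    return {
        hour: {
            "value_min": min(vals),
            "value_max": max(vals),
            "value_avg": sum(vals) / len(vals),
            "num": len(vals),
        }
        for hour, vals in buckets.items()
    }
```

A few thousand raw samples per hour collapse into one row per hour, which is why keeping trends long-term is cheap.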

    • db100
      db100 commented
      > * The whole idea is to remove chunks (older than X days), which is faster (a chunk drop) than the housekeeper [...] That's my understanding.
      > * Eventually yes
      OK, good to know.

      > What would be the scenario for this?
      Well, two use cases:
      * The first (and less relevant, IMO) is late-arriving data points, or badly tuned clocks. Sometimes agent data is sent from machines whose clocks are faulty, so the values may be lagging behind; this data still has to be collected.
      * The second use case is more important and more general: importing historical data for existing items. Is there a procedure in Zabbix for bulk importing historical data? If there is not, then use case number 2 actually becomes use case number 1, since one might come up with the idea of "zabbix_sender"-ing old data, which currently will be discarded.
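For reference, `zabbix_sender` does accept per-value timestamps via its `-T` (`--with-timestamps`) option, where each input-file line is `<host> <key> <timestamp> <value>`. A small sketch of building such lines (the host and key names are made up for illustration; whether the server keeps values older than the item's history period is exactly the open question here):

```python
from datetime import datetime, timezone

def sender_line(host: str, key: str, ts: datetime, value) -> str:
    """Format one input line for `zabbix_sender -T -i <file>`:
    '<host> <key> <unix timestamp> <value>'."""
    return f"{host} {key} {int(ts.timestamp())} {value}"
```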

      > In general I'd say trends are pretty cheap - Trends is a built-in historical data reduction mechanism [...]

      I don't know how Zabbix handles time-series queries at the moment, but the nice thing about continuous aggregates in TimescaleDB is that one can query the aggregate and also get the not-yet-aggregated data back. In Zabbix terms, this would mean querying both the trends and the history table at the same time, preferring trends data where present and using history data where not.
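The "prefer trends, fall back to history" idea can be sketched in a few lines (keys stand in for hour buckets; this is an illustration of the query-merging idea above, not anything Zabbix actually does):

```python
def merge_series(trends: dict, history: dict) -> dict:
    """Answer a time-series query from both tables at once: take the
    hourly trends value where one exists, and fall back to values
    aggregated from raw history elsewhere."""
    merged = dict(history)  # raw/recent data as the fallback
    merged.update(trends)   # trends win wherever both tables cover the hour
    return merged
```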

      Also: if late-arriving history data were allowed in Zabbix, and if this old data were older than the usual 90-day history period (see "bulk import" above), where would it be stored? Would it first be stored in history and then eventually be moved to the trends table?