History cleanup

  • Pigi_102
    Member
    • Sep 2021
    • 35

    #1

    History cleanup

    I know I'm not the first one asking, but I can't find the answer.
    I have a PostgreSQL DB for my (inherited) Zabbix server, which I'm trying to set up a bit better than my predecessor did.
    At the moment the DB is huge compared to the number of items (almost 500 hosts, 65000 items and 35000 triggers): before I took over, they used to keep months and months of history, as the difference between history and trends was not clear to them.
    Now I've reduced the history retention (in general) to 7d and the housekeeping process is doing its work, but...
    ... the history table currently has 1.800.000.000+ rows (that's almost two billion rows), and from the SELECT I'm running, if I keep only the last month I should lose at least half of them.
    At the moment housekeeping deletes ~8 million rows per cycle, but each cycle takes quite a while, so it's not very efficient (in the last two days I only "lost" 22 million rows).
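
    For reference, the count I'm running is along these lines (just a sketch against the stock Zabbix schema, where the history tables key time on the "clock" Unix-epoch column; the 30-day cutoff is only an example):

        -- count history rows older than ~30 days (clock is a Unix epoch)
        SELECT count(*)
        FROM history
        WHERE clock < extract(epoch FROM now() - interval '30 days')::integer;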

    Now the question: is it unsafe to manually delete the bulk of these records, to help housekeeping do its job?
    I don't know when Zabbix computes its trends, and I don't want to lose trend data (while I do want to reduce the history size).

    Thanks in advance.

    Pierluigi
  • moooola
    Junior Member
    • Jul 2024
    • 29

    #2
    Hello,

    It seems that trends are not generated from the history tables but from a "trend cache".
    As the documentation says, "History tables do not participate in trend generation in any way", so I don't think you have to worry about losing trend data.

    https://www.zabbix.com/documentation/7.0/en/manual/config/items/history_and_trends
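
    If you want to double-check before purging, you can look at what period the trend aggregates already cover (a sketch against the stock schema, where trends and trends_uint hold the hourly aggregates; run it per table):

        -- what time range do the existing trend aggregates cover?
        SELECT to_timestamp(min(clock)) AS oldest,
               to_timestamp(max(clock)) AS newest,
               count(*)                 AS n_rows
        FROM trends_uint;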
    Last edited by moooola; 04-02-2025, 09:05.


    • Pigi_102
      Member
      • Sep 2021
      • 35

      #3
      Nice to hear!
      I'll go with the delete then!
      Housekeeping is way too slow at this. In a week, of my 1.850.000.000+ rows, only 50.000.000 are gone.
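
      My plan is a batched delete, roughly like this (just a sketch; the 100000 batch size and the 30-day cutoff are arbitrary, adjust them to your retention):

          -- delete in short slices so each transaction stays small,
          -- instead of one multi-billion-row statement
          DELETE FROM history
          WHERE ctid IN (
              SELECT ctid FROM history
              WHERE clock < extract(epoch FROM now() - interval '30 days')::integer
              LIMIT 100000
          );
          -- repeat until 0 rows are affected

      Bear in mind that a plain DELETE only leaves dead tuples behind; the table file itself shrinks only after VACUUM FULL (or pg_repack), so I'll plan for that afterwards.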
