What is the best way to migrate big history tables

  • AdrianG
    Junior Member
    • May 2019

    #1

    What is the best way to migrate big history tables

    Hi there

    Since the housekeeper process runs for about 3 to 6 hours each time, we have to change something in our configuration/setup. Somehow the trends and history database tables have grown to the following sizes:

    trends_uint    29 GB
    trends         17 GB
    history_uint  416 GB
    history_text   13 GB
    history_str    10 MB
    history_log    16 kB
    history        92 GB

    According to a lot of posts in the forum, our next step should be either to partition the big tables or to switch to TimescaleDB.
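
    For the TimescaleDB option, what I have in mind so far is roughly the following (only a sketch based on the TimescaleDB documentation, nothing I have tested on tables of this size; the 1-day chunk interval is just an assumption):

        -- load the extension into the zabbix database
        -- (needs the timescaledb packages installed and shared_preload_libraries set)
        CREATE EXTENSION IF NOT EXISTS timescaledb;

        -- turn the existing table into a hypertable, chunked on the integer
        -- "clock" column in 1-day (86400 s) chunks; migrate_data => true copies
        -- the existing rows into chunks, which on a 416 GB table will take a
        -- very long time and locks the table while it runs
        SELECT create_hypertable('history_uint', 'clock',
                                 chunk_time_interval => 86400,
                                 migrate_data        => true);

    The same would presumably have to be repeated for history, trends and trends_uint.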

    So far I haven't come up with a good idea for how to move the data (especially the history_uint and history tables) into partitioned tables. My best idea was to somehow get the trend data calculated first, then create the partitioned tables or hypertables and start fresh with those.
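
    To make that idea a bit more concrete, this is roughly what I picture for history_uint with native PostgreSQL 10 range partitioning (again only a sketch: the column definitions are copied from the Zabbix 3.4 schema, the epoch boundaries and names are just examples, and the old rows in the renamed table would no longer be visible to Zabbix):

        -- keep the old data under a different name and let Zabbix write
        -- into a fresh, range-partitioned table
        BEGIN;
        ALTER TABLE history_uint RENAME TO history_uint_old;

        CREATE TABLE history_uint (
            itemid  bigint                     NOT NULL,
            clock   integer       DEFAULT 0    NOT NULL,
            value   numeric(20,0) DEFAULT '0'  NOT NULL,
            ns      integer       DEFAULT 0    NOT NULL
        ) PARTITION BY RANGE (clock);

        -- one partition per month, e.g. June 2019 (clock is a unix timestamp);
        -- PostgreSQL 10 cannot create indexes on the partitioned parent, so the
        -- (itemid, clock) index has to be created on every partition
        CREATE TABLE history_uint_2019_06 PARTITION OF history_uint
            FOR VALUES FROM (1559347200) TO (1561939200);
        CREATE INDEX history_uint_2019_06_1 ON history_uint_2019_06 (itemid, clock);
        COMMIT;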

    My question to the forum is: Is there a better solution to achieve my goal? What do I need to do to preserve as much trend and history data as possible? How do I get the housekeeper to just calculate the trend data without deleting the history (just for the time before the change to the partitioned tables)?
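
    For reference, the only housekeeper knob I have found so far is the server configuration parameter below; my assumption is that switching it to on-demand runs would at least stop old history from being deleted while the migration is being prepared:

        # zabbix_server.conf - disable the periodic housekeeper runs for now;
        # it can still be started manually with "zabbix_server -R housekeeper_execute"
        HousekeepingFrequency=0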

    Our Zabbix environment has 446 hosts, 56338 items, 23689 triggers and a required server performance of around 850 new values per second. We have networking devices such as switches and firewalls, some SAP metrics, some special devices and mainly Linux hosts in Zabbix monitoring. The Zabbix server is a VM on VMware, running RedHat 7.5 and PostgreSQL 10.5 with Zabbix 3.4.12. The PostgreSQL data files are on NetApp All Flash storage connected over 10Gb.

    We use Zabbix for notifications and performance analysis. That is why a lot of items are set to a 60s interval. Most items are set to 1 week of history and 365 days of trends.

    I appreciate all tips and tricks...