Database grows strongly after upgrade to Zabbix 6

This topic has been answered.
  • d.roeper
    Junior Member
    • Feb 2022
    • 17

    #1

    Database grows strongly after upgrade to Zabbix 6

    Hello,

    I am worried that my compression has not been working since I moved from Zabbix 5 to 6, and that the housekeeper is not deleting history/trends.

    I am using TimescaleDB 2.3.1 on PostgreSQL 13.7.
    History is kept for 30 days,
    trends for 180 days,
    and compression is set to 7 days.

    Before the move our database was about 80 GB. Within 20 days it has grown by 100 GB, although we have hardly created any new items/hosts. Only our 3 main proxies were completely reinstalled (but they kept the same hosts as before).

    In the server log, searching for "delet", I see:
    1436:20220618:190849.539 housekeeper [deleted 0 hist/trends, 0 items/triggers, 472 events, 88 problems, 0 sessions, 0 alarms, 0 audit, 0 records in 0.057586 sec, idle for 1 hour(s)]
    Only the numbers of events and problems ever change.

    There are three compression jobs in the database (though probably nothing to do with the setting in the web frontend?!):
    SELECT * FROM timescaledb_information.jobs;
    job_id | application_name | schedule_interval | max_runtime | max_retries | retry_period | proc_schema | proc_name | owner | scheduled | config | next_start | hypertable_schema | hypertable_name
    --------+---------------------------+-------------------+-------------+-------------+--------------+-----------------------+--------------------+----------+-----------+-------------------------------------------------+-------------------------------+-------------------+-----------------
    1008 | Compression Policy [1008] | 1 day | 00:00:00 | -1 | 01:00:00 | _timescaledb_internal | policy_compression | zabbix | t | {"hypertable_id": 15, "compress_after": 612000} | 2022-06-21 20:10:37.916419+02 | public | history_uint
    1009 | Compression Policy [1009] | 1 day | 00:00:00 | -1 | 01:00:00 | _timescaledb_internal | policy_compression | zabbix | t | {"hypertable_id": 6, "compress_after": 612000} | 2022-06-21 19:49:46.08989+02 | public | trends
    1010 | Compression Policy [1010] | 1 day | 00:00:00 | -1 | 01:00:00 | _timescaledb_internal | policy_compression | zabbix | t | {"hypertable_id": 7, "compress_after": 612000} | 2022-06-21 19:50:18.816019+02 | public | trends_uint

    I have followed guides covering the PostgreSQL update, the Zabbix upgrade, and the database customization.

    Now, how do I find out whether this is all normal or whether I have an error somewhere?


    Translated with www.DeepL.com/Translator (free version)
  • Answer selected by d.roeper at 18-07-2022, 09:13.
    d.roeper
    Junior Member
    • Feb 2022
    • 17

    Hello,

    I have the solution:

    Code:
    -- Recreate history_str as a TimescaleDB hypertable and copy the data back
    ALTER TABLE IF EXISTS history_str RENAME TO history_str_old;
    CREATE TABLE history_str (LIKE history_str_old INCLUDING DEFAULTS INCLUDING CONSTRAINTS EXCLUDING INDEXES);
    ALTER TABLE history_str ADD PRIMARY KEY (itemid,clock,ns);
    SELECT create_hypertable('history_str', 'clock', chunk_time_interval => 86400);
    INSERT INTO history_str SELECT * FROM history_str_old;
    DROP TABLE IF EXISTS history_str_old;
    CREATE INDEX history_str_1 on history_str (itemid,clock);
    ALTER TABLE history_str owner to zabbix;
    
    -- Same procedure for history
    ALTER TABLE IF EXISTS history RENAME TO history_old;
    CREATE TABLE history (LIKE history_old INCLUDING DEFAULTS INCLUDING CONSTRAINTS EXCLUDING INDEXES);
    ALTER TABLE history ADD PRIMARY KEY (itemid,clock,ns);
    SELECT create_hypertable('history', 'clock', chunk_time_interval => 86400);
    INSERT INTO history SELECT * FROM history_old;
    DROP TABLE IF EXISTS history_old;
    CREATE INDEX history_1 on history (itemid,clock);
    ALTER TABLE history owner to zabbix;
    
    -- Same procedure for history_text
    ALTER TABLE IF EXISTS history_text RENAME TO history_text_old;
    CREATE TABLE history_text (LIKE history_text_old INCLUDING DEFAULTS INCLUDING CONSTRAINTS EXCLUDING INDEXES);
    ALTER TABLE history_text ADD PRIMARY KEY (itemid,clock,ns);
    SELECT create_hypertable('history_text', 'clock', chunk_time_interval => 86400);
    INSERT INTO history_text SELECT * FROM history_text_old;
    DROP TABLE IF EXISTS history_text_old;
    CREATE INDEX history_text_1 on history_text (itemid,clock);
    ALTER TABLE history_text owner to zabbix;
    
    -- Same procedure for history_log
    ALTER TABLE IF EXISTS history_log RENAME TO history_log_old;
    CREATE TABLE history_log (LIKE history_log_old INCLUDING DEFAULTS INCLUDING CONSTRAINTS EXCLUDING INDEXES);
    ALTER TABLE history_log ADD PRIMARY KEY (itemid,clock,ns);
    SELECT create_hypertable('history_log', 'clock', chunk_time_interval => 86400);
    INSERT INTO history_log SELECT * FROM history_log_old;
    DROP TABLE IF EXISTS history_log_old;
    CREATE INDEX history_log_1 on history_log (itemid,clock);
    ALTER TABLE history_log owner to zabbix;
    After that, the server log is clean and my database is under 120 GB.
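
    To confirm that the recreated tables really are hypertables, one quick check (using the standard TimescaleDB information view):

    Code:
    -- All five history tables and both trends tables should be listed.
    SELECT hypertable_name FROM timescaledb_information.hypertables;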
    Last edited by d.roeper; 18-07-2022, 09:15.


    • vladimir_lv
      Senior Member
      • May 2022
      • 240

      #2
      Hi!
      1. The audit log started to grow because in Zabbix 6 it now contains records of all configuration changes for all Zabbix objects, including changes that occur as a result of executing an LLD rule, a network discovery action, an autoregistration action, or a script execution. Previously, configuration changes initiated from the Zabbix server, for example as a result of executing a discovery rule, were not recorded. Now such object modifications are stored as audit records attributed to the user "System".
      Please check What's new in Zabbix 6.0.0 https://www.zabbix.com/documentation...on/whatsnew600
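
      To see how much of the growth the audit log itself accounts for, its on-disk size can be checked directly (a generic PostgreSQL query, assuming the default table name auditlog):

      Code:
      -- Total size of the auditlog table including indexes and TOAST data.
      SELECT pg_size_pretty(pg_total_relation_size('auditlog'));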

      2. The housekeeper has started deleting old records, i.e. records that were created by the "old" Zabbix, which is why the audit log size is not decreasing as fast as expected.


      • d.roeper
        Junior Member
        • Feb 2022
        • 17

        #3
        Hello vladimir_lv
        thank you for the quick reply.
        I had read about the audit log, but did not expect it to grow so fast.
        Unfortunately, I don't quite understand point 2. Could you perhaps explain it differently?


        • vladimir_lv
          Senior Member
          • May 2022
          • 240

          #4
          Ok. Your input data:
          1. History data storage period is 30 days.
          2. You made the upgrade to Zabbix 6.0 20 days ago.
          So the housekeeper's task for today is to clean all history records older than 30 days, right? But 30 days ago you were running Zabbix 5.0, which wrote far fewer audit log records. That is why the volume of cleaned records is not large.


          • d.roeper
            Junior Member
            • Feb 2022
            • 17

            #5
            Oh yeah, sure, that makes sense.
            So if it really is just the audit log causing my database to grow, can I get back to my old database size by going to General - Audit Log, turning it off, and waiting a day?

            But I still can't imagine that the audit log alone made my database grow by 100 GB, from 80 to now over 170 GB...
            Last edited by d.roeper; 21-06-2022, 12:26.


            • adrian_c
              Junior Member
              • Nov 2018
              • 12

              #6
              Hi

              Have you checked the compression state of your tables, e.g. for history_uint:

              SELECT pg_size_pretty(before_compression_total_bytes) as "before compression",pg_size_pretty(after_compression_total_bytes) as "after compression" FROM hypertable_compression_stats('history_uint');
              before compression | after compression
              --------------------+-------------------
              1061 GB | 144 GB
              (1 row)


              Also, are you missing some compression jobs? Here are mine:

              job_id | application_name | schedule_interval | max_runtime | max_retries | retry_period | proc_schema | proc_name | owner | scheduled | config | next_start | hypertable_schema | hypertable_name
              --------+---------------------------+-------------------+-------------+-------------+--------------+-----------------------+--------------------+----------+-----------+-------------------------------------------------+-------------------------------+-------------------+-----------------
              1005 | Compression Policy [1005] | 1 day | 00:00:00 | -1 | 01:00:00 | _timescaledb_internal | policy_compression | zabbix | t | {"hypertable_id": 6, "compress_after": 612000} | 2022-06-29 12:28:51.825835+00 | public | trends
              1006 | Compression Policy [1006] | 1 day | 00:00:00 | -1 | 01:00:00 | _timescaledb_internal | policy_compression | zabbix | t | {"hypertable_id": 7, "compress_after": 612000} | 2022-06-29 12:42:47.332505+00 | public | trends_uint
              1007 | Compression Policy [1007] | 1 day | 00:00:00 | -1 | 01:00:00 | _timescaledb_internal | policy_compression | zabbix | t | {"hypertable_id": 20, "compress_after": 612000} | 2022-06-30 10:44:36.334552+00 | public | history
              1008 | Compression Policy [1008] | 1 day | 00:00:00 | -1 | 01:00:00 | _timescaledb_internal | policy_compression | zabbix | t | {"hypertable_id": 21, "compress_after": 612000} | 2022-06-29 20:21:03.984577+00 | public | history_uint
              1009 | Compression Policy [1009] | 1 day | 00:00:00 | -1 | 01:00:00 | _timescaledb_internal | policy_compression | zabbix | t | {"hypertable_id": 24, "compress_after": 612000} | 2022-06-30 08:42:55.691618+00 | public | history_str
              1010 | Compression Policy [1010] | 1 day | 00:00:00 | -1 | 01:00:00 | _timescaledb_internal | policy_compression | zabbix | t | {"hypertable_id": 23, "compress_after": 612000} | 2022-06-30 08:50:18.94441+00 | public | history_text
              1011 | Compression Policy [1011] | 1 day | 00:00:00 | -1 | 01:00:00 | _timescaledb_internal | policy_compression | zabbix | t | {"hypertable_id": 22, "compress_after": 612000} | 2022-06-30 08:47:45.593516+00 | public | history_log
              1 | Telemetry Reporter [1] | 24:00:00 | 00:01:40 | -1 | 01:00:00 | _timescaledb_internal | policy_telemetry | postgres | t | | 2022-06-29 11:28:13.673552+00 |


              Other useful SQL commands:

              Check job stats:

              SELECT * FROM timescaledb_information.job_stats;

              Check compression stats of an individual hypertable:

              select * from hypertable_compression_stats('history_uint');

              Show all compressed chunks:

              SELECT * FROM timescaledb_information.chunks where is_compressed=true;


              • d.roeper
                Junior Member
                • Feb 2022
                • 17

                #7
                Hi

                Thanks for your tips. I have now run your commands, and it looks like I am missing a few jobs:
                I don't have history_str, history_text, or history_log at all. Are they important? How do I get them added afterwards?

                Enclosed are my outputs:

                zabbix=# SELECT pg_size_pretty(before_compression_total_bytes) as "before compression",pg_size_pretty(after_compression_total_bytes) as "after compression" FROM hypertable_compression_stats('trends');
                before compression | after compression
                --------------------+-------------------
                12 GB | 1141 MB
                (1 row)




                zabbix=# SELECT pg_size_pretty(before_compression_total_bytes) as "before compression",pg_size_pretty(after_compression_total_bytes) as "after compression" FROM hypertable_compression_stats('trends_uint');
                before compression | after compression
                --------------------+-------------------
                24 GB | 663 MB
                (1 row)




                zabbix=# SELECT pg_size_pretty(before_compression_total_bytes) as "before compression",pg_size_pretty(after_compression_total_bytes) as "after compression" FROM hypertable_compression_stats('history');
                before compression | after compression
                --------------------+-------------------
                (0 rows)




                zabbix=# SELECT pg_size_pretty(before_compression_total_bytes) as "before compression",pg_size_pretty(after_compression_total_bytes) as "after compression" FROM hypertable_compression_stats('history_uint');
                before compression | after compression
                --------------------+-------------------
                114 GB | 7450 MB
                (1 row)




                zabbix=# SELECT pg_size_pretty(before_compression_total_bytes) as "before compression",pg_size_pretty(after_compression_total_bytes) as "after compression" FROM hypertable_compression_stats('history_str');
                before compression | after compression
                --------------------+-------------------
                (0 rows)




                zabbix=# SELECT pg_size_pretty(before_compression_total_bytes) as "before compression",pg_size_pretty(after_compression_total_bytes) as "after compression" FROM hypertable_compression_stats('history_text');
                before compression | after compression
                --------------------+-------------------
                (0 rows)




                zabbix=# SELECT pg_size_pretty(before_compression_total_bytes) as "before compression",pg_size_pretty(after_compression_total_bytes) as "after compression" FROM hypertable_compression_stats('history_log');
                before compression | after compression
                --------------------+-------------------
                (0 rows)




                zabbix=# SELECT * FROM timescaledb_information.jobs;
                job_id | application_name | schedule_interval | max_runtime | max_retries | retry_period | proc_schema | proc_name | owner | scheduled | config | next_start | hypertable_schema | hypertable_name
                --------+---------------------------+-------------------+-------------+-------------+--------------+-----------------------+--------------------+----------+-----------+-------------------------------------------------+-------------------------------+-------------------+-----------------
                1014 | Compression Policy [1014] | 1 day | 00:00:00 | -1 | 01:00:00 | _timescaledb_internal | policy_compression | zabbix | t | {"hypertable_id": 15, "compress_after": 612000} | 2022-06-30 10:36:22.496732+02 | public | history_uint
                1015 | Compression Policy [1015] | 1 day | 00:00:00 | -1 | 01:00:00 | _timescaledb_internal | policy_compression | zabbix | t | {"hypertable_id": 6, "compress_after": 612000} | 2022-06-30 10:25:35.169235+02 | public | trends
                1016 | Compression Policy [1016] | 1 day | 00:00:00 | -1 | 01:00:00 | _timescaledb_internal | policy_compression | zabbix | t | {"hypertable_id": 7, "compress_after": 612000} | 2022-06-30 10:25:35.170909+02 | public | trends_uint
                1 | Telemetry Reporter [1] | 24:00:00 | 00:01:40 | -1 | 01:00:00 | _timescaledb_internal | policy_telemetry | postgres | t | | 2022-06-29 23:13:02.492815+02 | |
                (4 rows)




                zabbix=# SELECT * FROM timescaledb_information.job_stats;
                hypertable_schema | hypertable_name | job_id | last_run_started_at | last_successful_finish | last_run_status | job_status | last_run_duration | next_start | total_runs | total_successes | total_failures
                -------------------+-----------------+--------+-------------------------------+-------------------------------+-----------------+------------+-------------------+-------------------------------+------------+-----------------+----------------
                public | history_uint | 1014 | 2022-06-29 10:35:12.159341+02 | 2022-06-29 10:36:22.496732+02 | Success | Scheduled | 00:01:10.337391 | 2022-06-30 10:36:22.496732+02 | 9 | 9 | 0
                public | trends | 1015 | 2022-06-29 10:25:35.159506+02 | 2022-06-29 10:25:35.169235+02 | Success | Scheduled | 00:00:00.009729 | 2022-06-30 10:25:35.169235+02 | 9 | 9 | 0
                public | trends_uint | 1016 | 2022-06-29 10:25:35.158517+02 | 2022-06-29 10:25:35.170909+02 | Success | Scheduled | 00:00:00.012392 | 2022-06-30 10:25:35.170909+02 | 9 | 9 | 0
                | | 1 | 2022-06-29 13:22:21.403231+02 | 2022-06-26 17:03:59.762588+02 | Failed | Scheduled | 00:00:03.589584 | 2022-06-29 23:13:02.492815+02 | 471 | 418 | 53
                (4 rows)




                zabbix=# select * from hypertable_compression_stats('history_uint');
                total_chunks | number_compressed_chunks | before_compression_table_bytes | before_compression_index_bytes | before_compression_toast_bytes | before_compression_total_bytes | after_compression_table_bytes | after_compression_index_bytes | after_compression_toast_bytes | after_compression_total_bytes | node_name
                --------------+--------------------------+--------------------------------+--------------------------------+--------------------------------+--------------------------------+-------------------------------+-------------------------------+-------------------------------+-------------------------------+-----------
                31 | 23 | 61478158336 | 61361946624 | 0 | 122840104960 | 439164928 | 68591616 | 7304536064 | 7812292608 |
                (1 row)




                zabbix=# SELECT * FROM timescaledb_information.chunks where is_compressed=true;
                hypertable_schema | hypertable_name | chunk_schema | chunk_name | primary_dimension | primary_dimension_type | range_start | range_end | range_start_integer | range_end_integer | is_compressed | chunk_tablespace | data_nodes
                -------------------+-----------------+-----------------------+----------------------+-------------------+------------------------+-------------+-----------+---------------------+-------------------+---------------+------------------+------------
                public | trends_uint | _timescaledb_internal | _hyper_7_2324_chunk | clock | integer | | | 1640736000 | 1643328000 | t | |
                public | trends | _timescaledb_internal | _hyper_6_2325_chunk | clock | integer | | | 1640736000 | 1643328000 | t | |
                public | trends_uint | _timescaledb_internal | _hyper_7_2628_chunk | clock | integer | | | 1643328000 | 1645920000 | t | |
                public | trends | _timescaledb_internal | _hyper_6_2629_chunk | clock | integer | | | 1643328000 | 1645920000 | t | |
                public | trends_uint | _timescaledb_internal | _hyper_7_2931_chunk | clock | integer | | | 1645920000 | 1648512000 | t | |
                public | trends | _timescaledb_internal | _hyper_6_2932_chunk | clock | integer | | | 1645920000 | 1648512000 | t | |
                public | trends | _timescaledb_internal | _hyper_6_3236_chunk | clock | integer | | | 1648512000 | 1651104000 | t | |
                public | trends_uint | _timescaledb_internal | _hyper_7_3237_chunk | clock | integer | | | 1648512000 | 1651104000 | t | |
                public | trends | _timescaledb_internal | _hyper_6_3541_chunk | clock | integer | | | 1651104000 | 1653696000 | t | |
                public | trends_uint | _timescaledb_internal | _hyper_7_3542_chunk | clock | integer | | | 1651104000 | 1653696000 | t | |
                public | history_uint | _timescaledb_internal | _hyper_15_3918_chunk | clock | integer | | | 1653868800 | 1653955200 | t | |
                public | history_uint | _timescaledb_internal | _hyper_15_3919_chunk | clock | integer | | | 1653955200 | 1654041600 | t | |
                public | history_uint | _timescaledb_internal | _hyper_15_3920_chunk | clock | integer | | | 1654041600 | 1654128000 | t | |
                public | history_uint | _timescaledb_internal | _hyper_15_3945_chunk | clock | integer | | | 1654128000 | 1654214400 | t | |
                public | history_uint | _timescaledb_internal | _hyper_15_3947_chunk | clock | integer | | | 1654214400 | 1654300800 | t | |
                public | history_uint | _timescaledb_internal | _hyper_15_3949_chunk | clock | integer | | | 1654300800 | 1654387200 | t | |
                public | history_uint | _timescaledb_internal | _hyper_15_3953_chunk | clock | integer | | | 1654387200 | 1654473600 | t | |
                public | history_uint | _timescaledb_internal | _hyper_15_3955_chunk | clock | integer | | | 1654473600 | 1654560000 | t | |
                public | history_uint | _timescaledb_internal | _hyper_15_3957_chunk | clock | integer | | | 1654560000 | 1654646400 | t | |
                public | history_uint | _timescaledb_internal | _hyper_15_3959_chunk | clock | integer | | | 1654646400 | 1654732800 | t | |
                public | history_uint | _timescaledb_internal | _hyper_15_3961_chunk | clock | integer | | | 1654732800 | 1654819200 | t | |
                public | history_uint | _timescaledb_internal | _hyper_15_3963_chunk | clock | integer | | | 1654819200 | 1654905600 | t | |
                public | history_uint | _timescaledb_internal | _hyper_15_3965_chunk | clock | integer | | | 1654905600 | 1654992000 | t | |
                public | history_uint | _timescaledb_internal | _hyper_15_3967_chunk | clock | integer | | | 1654992000 | 1655078400 | t | |
                public | history_uint | _timescaledb_internal | _hyper_15_3969_chunk | clock | integer | | | 1655078400 | 1655164800 | t | |
                public | history_uint | _timescaledb_internal | _hyper_15_3971_chunk | clock | integer | | | 1655164800 | 1655251200 | t | |
                public | history_uint | _timescaledb_internal | _hyper_15_3973_chunk | clock | integer | | | 1655251200 | 1655337600 | t | |
                public | history_uint | _timescaledb_internal | _hyper_15_3975_chunk | clock | integer | | | 1655337600 | 1655424000 | t | |
                public | history_uint | _timescaledb_internal | _hyper_15_3977_chunk | clock | integer | | | 1655424000 | 1655510400 | t | |
                public | history_uint | _timescaledb_internal | _hyper_15_3979_chunk | clock | integer | | | 1655510400 | 1655596800 | t | |
                public | history_uint | _timescaledb_internal | _hyper_15_3981_chunk | clock | integer | | | 1655596800 | 1655683200 | t | |
                public | history_uint | _timescaledb_internal | _hyper_15_3983_chunk | clock | integer | | | 1655683200 | 1655769600 | t | |
                public | history_uint | _timescaledb_internal | _hyper_15_3985_chunk | clock | integer | | | 1655769600 | 1655856000 | t | |
                (33 rows)


                • adrian_c
                  Junior Member
                  • Nov 2018
                  • 12

                  #8
                  Timescale compression jobs are set up automatically as long as "Enable Compression" is selected for "History and trends compression" in General\Housekeeping. These settings are checked during Zabbix server startup, so I would suggest either restarting the Zabbix server and monitoring the server log for errors during startup or (if available) looking at your server log from when the Zabbix server was last started.
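
                  After a restart with compression enabled, the policies should reappear; one way to verify is the same view used earlier in the thread:

                  Code:
                  -- One row per compression policy; history/trends tables missing here
                  -- mean the server failed to add their policies at startup.
                  SELECT hypertable_name, next_start FROM timescaledb_information.jobs WHERE proc_name = 'policy_compression';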


                  • d.roeper
                    Junior Member
                    • Feb 2022
                    • 17

                    #9
                    Hello

                    It actually looks like I do have problems with the database. See this log extract:

                    ...
                    2765871:20220701:085846.125 [Z3005] query failed: [0] PGRES_FATAL_ERROR:ERROR: table "history" is not a hypertable
                    [select set_integer_now_func('history', 'zbx_ts_unix_now', true)]
                    2765871:20220701:085846.139 [Z3005] query failed: [0] PGRES_FATAL_ERROR:ERROR: unrecognized parameter namespace "timescaledb"
                    [alter table history set (timescaledb.compress,timescaledb.compress_segmentby='itemid',timescaledb.compress_orderby='clock,ns')]
                    2765871:20220701:085846.148 [Z3005] query failed: [0] PGRES_FATAL_ERROR:ERROR: table "history" is not a hypertable
                    [select add_compression_policy('history', integer '612000')]
                    2765871:20220701:085846.149 failed to add compression policy to table 'history'
                    2765871:20220701:085846.161 [Z3005] query failed: [0] PGRES_FATAL_ERROR:ERROR: table "history_str" is not a hypertable
                    [select set_integer_now_func('history_str', 'zbx_ts_unix_now', true)]
                    2765871:20220701:085846.161 [Z3005] query failed: [0] PGRES_FATAL_ERROR:ERROR: unrecognized parameter namespace "timescaledb"
                    [alter table history_str set (timescaledb.compress,timescaledb.compress_segmentby='itemid',timescaledb.compress_orderby='clock,ns')]
                    2765871:20220701:085846.162 [Z3005] query failed: [0] PGRES_FATAL_ERROR:ERROR: table "history_str" is not a hypertable
                    [select add_compression_policy('history_str', integer '612000')]
                    2765871:20220701:085846.162 failed to add compression policy to table 'history_str'
                    2765871:20220701:085846.162 [Z3005] query failed: [0] PGRES_FATAL_ERROR:ERROR: table "history_text" is not a hypertable
                    [select set_integer_now_func('history_text', 'zbx_ts_unix_now', true)]
                    2765871:20220701:085846.162 [Z3005] query failed: [0] PGRES_FATAL_ERROR:ERROR: unrecognized parameter namespace "timescaledb"
                    [alter table history_text set (timescaledb.compress,timescaledb.compress_segmentby='itemid',timescaledb.compress_orderby='clock,ns')]
                    2765871:20220701:085846.163 [Z3005] query failed: [0] PGRES_FATAL_ERROR:ERROR: table "history_text" is not a hypertable
                    [select add_compression_policy('history_text', integer '612000')]
                    2765871:20220701:085846.163 failed to add compression policy to table 'history_text'
                    2765871:20220701:085846.163 [Z3005] query failed: [0] PGRES_FATAL_ERROR:ERROR: table "history_log" is not a hypertable
                    [select set_integer_now_func('history_log', 'zbx_ts_unix_now', true)]
                    2765871:20220701:085846.163 [Z3005] query failed: [0] PGRES_FATAL_ERROR:ERROR: unrecognized parameter namespace "timescaledb"
                    [alter table history_log set (timescaledb.compress,timescaledb.compress_segmentby='itemid',timescaledb.compress_orderby='clock,ns')]
                    2765871:20220701:085846.163 [Z3005] query failed: [0] PGRES_FATAL_ERROR:ERROR: table "history_log" is not a hypertable
                    [select add_compression_policy('history_log', integer '612000')]
                    2765871:20220701:085846.163 failed to add compression policy to table 'history_log'
                    ...

                    Before I start playing around with it without knowing exactly what I need to do:
                    how do I get the tables repaired, preferably without data loss?


                    • adrian_c
                      Junior Member
                      • Nov 2018
                      • 12

                      #10
                      Hi

                      It looks like either the script /usr/share/doc/zabbix-sql-scripts/postgresql/timescaledb.sql didn't complete properly when the database was set up with TimescaleDB, or the history table(s) were recreated after this script was run. This script creates the hypertables, e.g.:

                      PERFORM create_hypertable('history', 'clock', chunk_time_interval => 86400, migrate_data => true);
                      PERFORM create_hypertable('history_uint', 'clock', chunk_time_interval => 86400, migrate_data => true);
                      PERFORM create_hypertable('history_log', 'clock', chunk_time_interval => 86400, migrate_data => true);
                      PERFORM create_hypertable('history_text', 'clock', chunk_time_interval => 86400, migrate_data => true);
                      PERFORM create_hypertable('history_str', 'clock', chunk_time_interval => 86400, migrate_data => true);
                      PERFORM create_hypertable('trends', 'clock', chunk_time_interval => 2592000, migrate_data => true);
                      PERFORM create_hypertable('trends_uint', 'clock', chunk_time_interval => 2592000, migrate_data => true);


                      Adrian


                      • d.roeper
                        Junior Member
                        • Feb 2022
                        • 17

                        #11
                        Hi

                        So we had Zabbix 5 LTS at the time, which I then upgraded to 6 LTS.
                        Maybe I did something in the wrong order.

                        So the procedure now would probably be:
                        1. Stop the Zabbix services.
                        2. Execute your lines against the Zabbix database:
                        PERFORM create_hypertable('history', 'clock', chunk_time_interval => 86400, migrate_data => true);
                        PERFORM create_hypertable('history_uint', 'clock', chunk_time_interval => 86400, migrate_data => true);
                        PERFORM create_hypertable('history_log', 'clock', chunk_time_interval => 86400, migrate_data => true);
                        PERFORM create_hypertable('history_text', 'clock', chunk_time_interval => 86400, migrate_data => true);
                        PERFORM create_hypertable('history_str', 'clock', chunk_time_interval => 86400, migrate_data => true);
                        PERFORM create_hypertable('trends', 'clock', chunk_time_interval => 2592000, migrate_data => true);
                        PERFORM create_hypertable('trends_uint', 'clock', chunk_time_interval => 2592000, migrate_data => true);
                        3. Start the Zabbix services again and see if there are any other errors?

                        When I run these lines, does the compression happen right away, or on the next normal compression run? Are these quick commands, or can it take a few minutes before I get feedback?
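
                        A note on syntax: PERFORM is only valid inside PL/pgSQL (e.g. the DO block in Zabbix's timescaledb.sql script). Run interactively in psql, the equivalent call uses SELECT, for example:

                        Code:
                        -- migrate_data => true moves existing rows into chunks and can
                        -- take a long time (and hold locks) on large tables.
                        SELECT create_hypertable('history', 'clock', chunk_time_interval => 86400, migrate_data => true);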


                        Last edited by d.roeper; 01-07-2022, 09:21.

                        Comment

                        • adrian_c
                          Junior Member
                          • Nov 2018
                          • 12

                          #12
                          Hi

Sorry, but I have never converted a table that already contains data into a hypertable. According to the TimescaleDB documentation (https://docs.timescale.com/api/lates...ate-hypertable), there is a "migrate_data" option that migrates any existing data into chunks of the hypertable, but large tables can take a significant amount of time to migrate. When I implemented TimescaleDB for our Zabbix database, I started with empty history and trends tables, as I had migrated from MySQL and didn't want to move 850GB of history data.

Maybe someone else on here can help you. It might be worth posting another question in this forum asking how to convert existing history tables to TimescaleDB.

                          Adrian

                          Comment

                          • d.roeper
                            Junior Member
                            • Feb 2022
                            • 17

                            #13
                            Hello,

I have found a possible approach. I will implement it on our system over the next few days and report back.

                            Comment

                            • d.roeper
                              Junior Member
                              • Feb 2022
                              • 17

                              #14
                              Hello,

sorry that it took a little longer.
Unfortunately, the idea I had did not quite work either.
First, here are my commands and their output:

                              Code:
zabbix=# ALTER TABLE history_str RENAME TO history_str_old;
ALTER TABLE

zabbix=# CREATE TABLE history_str (
zabbix(# itemid bigint NOT NULL,
zabbix(# clock integer DEFAULT '0' NOT NULL,
zabbix(# value varchar(255) DEFAULT '' NOT NULL,
zabbix(# ns integer DEFAULT '0' NOT NULL,
zabbix(# PRIMARY KEY (itemid,clock,ns)
zabbix(# );
CREATE TABLE

zabbix=# \copy (select * from history_str_old) TO '/tmp/history_str.csv' DELIMITER ',' CSV
COPY 540295

zabbix=# CREATE TABLE temp_history_str (
zabbix(# itemid bigint NOT NULL,
zabbix(# clock integer DEFAULT '0' NOT NULL,
zabbix(# value varchar(255) DEFAULT '' NOT NULL,
zabbix(# ns integer DEFAULT '0' NOT NULL,
zabbix(# PRIMARY KEY (itemid,clock,ns)
zabbix(# );
CREATE TABLE

zabbix=# \copy temp_history_str FROM '/tmp/history_str.csv' DELIMITER ',' CSV
COPY 540295

zabbix=# select create_hypertable('history_str', 'clock', chunk_time_interval => 86400, migrate_data => true);
create_hypertable
---------------------------
(17,public,history_str,t)
(1 row)

zabbix=# INSERT INTO history_str SELECT * FROM temp_history_str ON CONFLICT (itemid,clock,ns) DO NOTHING;
INSERT 0 540295

zabbix=# select set_integer_now_func('history_str', 'zbx_ts_unix_now', true);
set_integer_now_func
----------------------

(1 row)

zabbix=# alter table history_str set (timescaledb.compress, timescaledb.compress_segmentby='itemid', timescaledb.compress_orderby='clock,ns');
ALTER TABLE

zabbix=# select add_compression_policy('history_str', (
zabbix(# select extract(epoch from (config::json->>'compress_after')::interval) from timescaledb_information.jobs where application_name like 'Compression%' and hypertable_schema='public' and hypertable_name='history_str_old'
zabbix(# )::integer
zabbix(# );
add_compression_policy
------------------------

(1 row)


zabbix=# select alter_job((select job_id from timescaledb_information.jobs where hypertable_schema='public' and hypertable_name='history_str'), scheduled => true);
ERROR: job ID cannot be NULL
I think the first part, exporting to CSV and re-importing via the temp table, is completely unnecessary. The history_str table in the Zabbix 5 schema is probably structured differently.
I completely forgot this table during the migration, so it was empty when Zabbix 6 started for the first time.
We also have no problem starting over with an empty table; only this time it should be a hypertable and compressible.

How do I create a new, empty history_str table, make it a hypertable, and create the TimescaleDB compression job?
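For reference, one possible recipe, as a sketch only: it assumes the stock Zabbix 6 history_str schema and reuses the 612000-second (7-day) threshold visible in the existing compression jobs. `zbx_ts_unix_now` is the integer-now function Zabbix's own TimescaleDB setup script creates; the segmentby/orderby layout mirrors the one attempted above.

```sql
-- Start over with an empty table (drops any remaining data!)
DROP TABLE IF EXISTS history_str;
CREATE TABLE history_str (
    itemid bigint NOT NULL,
    clock integer DEFAULT '0' NOT NULL,
    value varchar(255) DEFAULT '' NOT NULL,
    ns integer DEFAULT '0' NOT NULL
);
CREATE INDEX history_str_1 ON history_str (itemid, clock);
ALTER TABLE history_str OWNER TO zabbix;

-- Convert to a hypertable with 1-day chunks, as for the other history tables
SELECT create_hypertable('history_str', 'clock', chunk_time_interval => 86400);
SELECT set_integer_now_func('history_str', 'zbx_ts_unix_now');

-- Enable compression and schedule the policy (612000 s = 7 days)
ALTER TABLE history_str SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'itemid',
    timescaledb.compress_orderby = 'clock,ns'
);
SELECT add_compression_policy('history_str', 612000);
```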

                              Comment

                              • d.roeper
                                Junior Member
                                • Feb 2022
                                • 17

                                #15
                                Hello,

I may have found a solution. In any case, it has produced a table that now appears in timescaledb_information.jobs, and there are no more errors in the server log:

                                Code:
                                CREATE TABLE history_str_new (LIKE history_str INCLUDING DEFAULTS INCLUDING CONSTRAINTS EXCLUDING INDEXES);
                                SELECT create_hypertable('history_str_new', 'clock', chunk_time_interval => 86400);
                                INSERT INTO history_str_new SELECT * FROM history_str;
                                DROP TABLE IF EXISTS history_str;
                                ALTER TABLE IF EXISTS history_str_new RENAME TO history_str;
                                CREATE INDEX history_str_1 on history_str (itemid,clock);
                                ALTER TABLE history_str owner to zabbix;
                                SELECT * FROM timescaledb_information.jobs;
                                job_id | application_name | schedule_interval | max_runtime | max_retries | retry_period | proc_schema | proc_name | owner | scheduled | config | next_start | hypertable_schema | hypertable_name
                                --------+---------------------------+-------------------+-------------+-------------+--------------+-----------------------+--------------------+----------+-----------+-------------------------------------------------+-------------------------------+-------------------+-----------------
                                1017 | Compression Policy [1017] | 1 day | 00:00:00 | -1 | 01:00:00 | _timescaledb_internal | policy_compression | zabbix | t | {"hypertable_id": 17, "compress_after": 612000} | 2022-07-12 12:47:36.212216+02 | public | history_str
                                1014 | Compression Policy [1014] | 1 day | 00:00:00 | -1 | 01:00:00 | _timescaledb_internal | policy_compression | zabbix | t | {"hypertable_id": 15, "compress_after": 612000} | 2022-07-12 10:20:57.633263+02 | public | history_uint
                                1015 | Compression Policy [1015] | 1 day | 00:00:00 | -1 | 01:00:00 | _timescaledb_internal | policy_compression | zabbix | t | {"hypertable_id": 6, "compress_after": 612000} | 2022-07-12 10:16:45.01567+02 | public | trends
                                1016 | Compression Policy [1016] | 1 day | 00:00:00 | -1 | 01:00:00 | _timescaledb_internal | policy_compression | zabbix | t | {"hypertable_id": 7, "compress_after": 612000} | 2022-07-12 10:16:45.016298+02 | public | trends_uint
                                1 | Telemetry Reporter [1] | 24:00:00 | 00:01:40 | -1 | 01:00:00 | _timescaledb_internal | policy_telemetry | postgres | t | | 2022-07-11 22:05:33.679273+02 | |

Could this be the right way?
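Once the policy above has run at least once, the result can be checked per chunk (a sketch, using the TimescaleDB 2.x stats function):

```sql
-- Show compression state and sizes for each chunk of history_str
SELECT chunk_name,
       compression_status,
       before_compression_total_bytes,
       after_compression_total_bytes
FROM chunk_compression_stats('history_str')
ORDER BY chunk_name;
```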

                                Comment

                                Working...