Migrate Zabbix PostgreSQL TimescaleDB to New Server

This topic has been answered.
  • dieselfluxcapacitor
    Junior Member
    • Jul 2023
    • 11

    #1

    Migrate Zabbix PostgreSQL TimescaleDB to New Server

    I am trying to understand the correct way to migrate my old Zabbix PostgreSQL database to a new PostgreSQL server. My attempts to migrate the database have not been successful, and I suspect the problem has to do with TimescaleDB.

    ENVIRONMENT:
    SERVER A (old database server)
    • Zabbix 6.4
    • Ubuntu 20.04 LTS
    • PostgreSQL 15.5
    • TimescaleDB 2.11
    SERVER B (new database server)
    • Zabbix 7.0
    • Ubuntu 24.04 LTS
    • PostgreSQL 16.6
    • TimescaleDB 2.17.2
    I've attempted to use pg_dump and pg_restore:
    Code:
    sudo -u postgres pg_dump -Fc -d zabbix | gzip > zabbix_backup_dump.sql.gz;
    zcat /mnt/data/zabbix_backup_dump.sql.gz | sudo -u postgres pg_restore -d zabbix;
    However, I receive numerous errors about hypertables failing to restore:
    Code:
    pg_restore: error: could not execute query: ERROR:  column "replication_factor" of relation "hypertable" does not exist
    Command was: COPY _timescaledb_catalog.hypertable (id, schema_name, table_name, associated_schema_name, associated_table_prefix, num_dimensions, chunk_sizing_func_schema, chunk_sizing_func_name, chunk_target_size, compression_state, compressed_hypertable_id, replication_factor) FROM stdin;
    pg_restore: error: COPY failed for table "chunk": ERROR:  null value in column "creation_time" of relation "chunk" violates not-null constraint
    DETAIL:  Failing row contains (4101, 8, _timescaledb_internal, compress_hyper_8_4101_chunk, null, f, 0, f, null).
    CONTEXT:  COPY chunk, line 1: "4101     8       _timescaledb_internal   compress_hyper_8_4101_chunk     \N   f"
    pg_restore: error: COPY failed for table "dimension": ERROR:  insert or update on table "dimension" violates foreign key constraint "dimension_hypertable_id_fkey"
    DETAIL:  Key (hypertable_id)=(1) is not present in table "hypertable".
    pg_restore: error: COPY failed for table "dimension_slice": ERROR:  insert or update on table "dimension_slice" violates foreign key constraint "dimension_slice_dimension_id_fkey"
    DETAIL:  Key (dimension_id)=(1) is not present in table "dimension".
    pg_restore: error: COPY failed for table "chunk_constraint": ERROR:  insert or update on table "chunk_constraint" violates foreign key constraint "chunk_constraint_chunk_id_fkey"
    DETAIL:  Key (chunk_id)=(4113) is not present in table "chunk".
    pg_restore: error: could not execute query: ERROR:  relation "_timescaledb_catalog.chunk_data_node" does not exist
    Command was: COPY _timescaledb_catalog.chunk_data_node (chunk_id, node_chunk_id, node_name) FROM stdin;
    pg_restore: error: COPY failed for table "chunk_index": ERROR:  insert or update on table "chunk_index" violates foreign key constraint "chunk_index_chunk_id_fkey"
    DETAIL:  Key (chunk_id)=(3804) is not present in table "chunk".
    pg_restore: error: COPY failed for table "compression_chunk_size": ERROR:  insert or update on table "compression_chunk_size" violates foreign key constraint "compression_chunk_size_chunk_id_fkey"
    DETAIL:  Key (chunk_id)=(3792) is not present in table "chunk".
    pg_restore: error: could not execute query: ERROR:  column "bucket_width" of relation "continuous_agg" does not exist
    Command was: COPY _timescaledb_catalog.continuous_agg (mat_hypertable_id, raw_hypertable_id, parent_mat_hypertable_id, user_view_schema, user_view_name, partial_view_schema, partial_view_name, bucket_width, direct_view_schema, direct_view_name, materialized_only, finalized) FROM stdin;
    pg_restore: error: could not execute query: ERROR:  column "experimental" of relation "continuous_aggs_bucket_function" does not exist
    Command was: COPY _timescaledb_catalog.continuous_aggs_bucket_function (mat_hypertable_id, experimental, name, bucket_width, origin, timezone) FROM stdin;
    pg_restore: error: could not execute query: ERROR:  relation "_timescaledb_catalog.dimension_partition" does not exist
    Command was: COPY _timescaledb_catalog.dimension_partition (dimension_id, range_start, data_nodes) FROM stdin;
    pg_restore: error: could not execute query: ERROR:  relation "_timescaledb_catalog.hypertable_compression" does not exist
    Command was: COPY _timescaledb_catalog.hypertable_compression (hypertable_id, attname, compression_algorithm_id, segmentby_column_index, orderby_column_index, orderby_asc, orderby_nullsfirst) FROM stdin;
    pg_restore: error: could not execute query: ERROR:  relation "_timescaledb_catalog.hypertable_data_node" does not exist
    Command was: COPY _timescaledb_catalog.hypertable_data_node (hypertable_id, node_hypertable_id, node_name, block_chunks) FROM stdin;
    pg_restore: error: could not execute query: ERROR:  relation "_timescaledb_catalog.remote_txn" does not exist
    Command was: COPY _timescaledb_catalog.remote_txn (data_node_name, remote_transaction_id) FROM stdin;
    pg_restore: error: COPY failed for table "bgw_job": ERROR:  insert or update on table "bgw_job" violates foreign key constraint "bgw_job_hypertable_id_fkey"
    DETAIL:  Key (hypertable_id)=(1) is not present in table "hypertable".
    Is there a step that I am missing, or a parameter that I haven't set in the backup invocation? I'm absolutely stumped on this and would appreciate any help anyone can provide!
  • Answer selected by dieselfluxcapacitor at 17-12-2024, 01:01.
  • cyber
    Senior Member
    Zabbix Certified Specialist, Zabbix Certified Professional
    • Dec 2006
    • 4806

    #2
    First of all, I don't think you should use dump and restore across different versions (15 vs 16).
    Second... https://docs.timescale.com/self-host...es/upgrade-pg/


    • dieselfluxcapacitor
      Junior Member
      • Jul 2023
      • 11

      #3
      cyber thanks for the reply!

      I installed PostgreSQL 15 on the new database server and did a database backup/restore using the following commands:
      Code:
      # Backup on old database
      sudo -u postgres pg_dump -Fc -d zabbix > zabbix_Fc_12042024.bak
      
      # Restore on new database
      sudo -u postgres createuser --pwprompt zabbix;
      sudo -u postgres createdb -O zabbix zabbix;
      sudo -u postgres psql -c "SET client_encoding =\
          'UTF8'; UPDATE pg_database SET datcollate='en_US.UTF-8', datctype='en_US.UTF-8' WHERE datname='zabbix';\
           update pg_database set encoding = pg_char_to_encoding('UTF8') where datname = 'zabbix' ;"
      echo "CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;" | sudo -u postgres psql zabbix;
      sudo -u postgres pg_restore -d zabbix < /mnt/backup/zabbix_Fc_12042024.bak;
      cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb/schema.sql | sudo -u zabbix psql zabbix;
      The storm of restore errors has subsided; however, upon trying to run the Zabbix TimescaleDB script, I get this error:
      Code:
      ERROR:  must be owner of function cuid_timestamp
      NOTICE:  function base36_decode(pg_catalog.varchar) does not exist, skipping
      DROP FUNCTION
      NOTICE:  PostgreSQL version 15.10 (Ubuntu 15.10-1.pgdg24.04+1) is valid
      NOTICE:  TimescaleDB extension is detected
      NOTICE:  TimescaleDB version 2.17.2 is valid
      ERROR:  table "history" is already partitioned
      DETAIL:  It is not possible to turn tables that use inheritance into hypertables.
      CONTEXT:  SQL statement "SELECT create_hypertable('history', 'clock', chunk_time_interval => 86400, migrate_data => true, if_not_exists => true)"
      PL/pgSQL function inline_code_block line 64 at PERFORM
      Further, when I log into the GUI, I am still only seeing the last 30 days of data; the historical data from the past year has not been restored.

      PostgreSQL 15 and TimescaleDB 2.17 are both compatible with Zabbix 7.0 LTS, so I'm not sure what the issue is.
      Last edited by dieselfluxcapacitor; 06-12-2024, 18:44.


      • cyber
        Senior Member
        Zabbix Certified Specialist, Zabbix Certified Professional
        • Dec 2006
        • 4806

        #4
        I am no DBA whatsoever... But after creating the zabbix user and zabbix DB, I would not insist on running everything (scripts etc.) as user postgres. It will screw up db and table ownerships at some point... and then you get errors...
        And shouldn't you apply the timescale schema before restoring?

        When I did my last upgrade (with platform and DB version change etc.) I exported data from the DB to CSV files and imported it from those... All db creation and TS stuff was already applied, so everything inserted from csv found its natural place in the DB..
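        If you stay with dump/restore, the TimescaleDB docs also describe putting the extension into "restore mode" around pg_restore; a minimal sketch of that flow, reusing the database name and backup path from the posts above (adjust to your setup), and assuming the zabbix DB already exists with the same timescaledb extension version as the source server:

        ```shell
        # Enter TimescaleDB restore mode so catalog tables can be written directly.
        sudo -u postgres psql -d zabbix -c "SELECT timescaledb_pre_restore();"

        # Restore the custom-format dump produced on the old server.
        zcat /mnt/data/zabbix_backup_dump.sql.gz | sudo -u postgres pg_restore -d zabbix

        # Leave restore mode so TimescaleDB resumes normal operation.
        sudo -u postgres psql -d zabbix -c "SELECT timescaledb_post_restore();"
        ```
        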


        • dieselfluxcapacitor
          Junior Member
          • Jul 2023
          • 11

          #5
          cyber thanks for the guidance. I did some more research and testing, and I think I've ultimately solved the issue. The steps I took are outlined below (in case others run into the same trouble). What I ultimately found is that the versions of TimescaleDB and PostgreSQL need to match between the old and new servers. Even after upgrading the TimescaleDB extension on the current production database, I couldn't successfully restore a backup to the PostgreSQL 17 instance on the new server. I had to install PostgreSQL 15, restore the database, and then upgrade in-place to PostgreSQL 17.
          1. Captured a snapshot of the current production database server VM
          2. Stopped the Zabbix server service
          3. Queried the current TimescaleDB extension version
            Code:
            sudo -u postgres psql -c "\dx timescaledb;"
            	
            	List of installed extensions
            	Name | Version | Schema | Description
            	-------------+---------+--------+---------------------------------------------------------------------------------------
            	timescaledb | 2.11.0 | public | Enables scalable inserts and complex queries for time-series data (Community Edition)
            	(1 row)​
          4. Upgraded the TimescaleDB packages on the server (assuming the TimescaleDB apt repository is already configured and the package lists have been refreshed recently)
            Code:
            sudo apt upgrade -y timescaledb-2-postgresql-15 timescaledb-2-loader-postgresql-15 timescaledb-toolkit-postgresql-15 timescaledb-tools
          5. Configured the PostgreSQL database to use the new version of the TimescaleDB extension
            Code:
            sudo -u postgres psql -c "ALTER EXTENSION timescaledb UPDATE TO '2.17.2';"
          6. Re-ran the TimescaleDB tuning script
            Code:
            sudo timescaledb-tune --yes
          7. Restarted the PostgreSQL database server
            Code:
            sudo systemctl restart postgresql
          8. Confirmed the TimescaleDB extension has been updated to the new version
            Code:
            sudo -u postgres psql -c "\dx timescaledb;"
            	
            	List of installed extensions
            	Name | Version | Schema | Description
            	-------------+---------+--------+---------------------------------------------------------------------------------------
            	timescaledb | 2.17.2 | public | Enables scalable inserts and complex queries for time-series data (Community Edition)
            	(1 row)​
          9. Restarted the Zabbix server service
            Code:
            sudo systemctl start zabbix-server.service;
          10. Reviewed the following log files for any startup errors
            1. /var/log/postgresql/postgresql-15-main.log
            2. /var/log/zabbix/zabbix_server.log
          11. Backed up the current production database using pg_dump with the custom format
            Code:
            sudo -u postgres pg_dump -Fc zabbix | gzip > zabbix_backup_dump.sql.gz;
          12. Restored to the new database server (with the same version of TimescaleDB installed) from the backup
            Code:
            zcat /mnt/data/zabbix_backup_dump.sql.gz | sudo -u postgres pg_restore -d zabbix;
          From this point, I was able to simply follow the PostgreSQL instructions for performing an in-place upgrade using pg_upgrade.
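          On Debian/Ubuntu, that final in-place upgrade is usually driven through the postgresql-common cluster wrappers rather than raw pg_upgrade; a sketch, assuming the default "main" cluster names and that the empty cluster created by the postgresql-17 package can be discarded:

          ```shell
          # Drop the empty cluster that the postgresql-17 package created on install.
          sudo pg_dropcluster 17 main --stop

          # Upgrade the restored 15/main cluster in place (wraps pg_upgrade).
          sudo pg_upgradecluster 15 main

          # Once everything checks out, remove the old cluster.
          sudo pg_dropcluster 15 main
          ```
          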



          • dieselfluxcapacitor
            dieselfluxcapacitor commented
            Note that there should be a '>' character between 'gzip' and 'zabbix_backup_dump.sql.gz;'. It must have been mangled when copying/pasting into the Zabbix forum.
        • jjeff123
          Member
          • May 2022
          • 33

          #6

          I'm trying to do the same thing, but as far as I can tell I'm failing at either step 11 or 12.

          When I do my restore, I'm getting a bunch of errors and warnings. See below.

          I've tried a couple different things.

          First, I did a complete, normal install of Zabbix and TimescaleDB on the new server, then restored the database. I got a few hundred errors similar to the ones below. Zabbix started and had all my hosts, but I had no history data.

          Then, on the new server, I created a new zabbix database and enabled timescale only, assuming the restore would create the schema. That failed completely; Zabbix wouldn't start, saying the database isn't a Zabbix database.

          My backup and restore commands are
          Code:
          pg_dump -Z0 -j 10 -Fd zabbix -f /oldserverdata/backup5
          sudo -u postgres pg_restore -j 4 -Fd -O -d zabbix /data/scratch/backup5


          pg_restore: error: COPY failed for table "_hyper_16_5495_chunk": ERROR: function base36_decode(text) does not exist
          LINE 1: CAST(base36_decode(substring(cuid FROM 2 FOR 8))/1000 AS int...
          ^
          HINT: No function matches the given name and argument types. You might need to add explicit type casts.
          QUERY: CAST(base36_decode(substring(cuid FROM 2 FOR 8))/1000 AS integer)
          CONTEXT: PL/pgSQL function public.cuid_timestamp(character varying) line 3 at RETURN
          COPY _hyper_16_5495_chunk, line 1: "cm54n3cgondkbocusce3yhlp6 \N System 1735176173 2 15 498320 \N Interface Gi1/0/16(): Duplex status c..."
          pg_restore: error: COPY failed for table "chunk": ERROR: insert or update on table "chunk" violates foreign key constraint "chunk_hypertable_id_fkey"
          DETAIL: Key (hypertable_id)=(12) is not present in table "hypertable".
          pg_restore: error: COPY failed for table "chunk_index": ERROR: insert or update on table "chunk_index" violates foreign key constraint "chunk_index_chunk_id_fkey"
          DETAIL: Key (chunk_id)=(5177) is not present in table "chunk".
          pg_restore: error: COPY failed for table "chunk_constraint": ERROR: insert or update on table "chunk_constraint" violates foreign key constraint "chunk_constraint_chunk_id_fkey"
          DETAIL: Key (chunk_id)=(4749) is not present in table "chunk".
          pg_restore: error: COPY failed for table "compression_chunk_size": ERROR: insert or update on table "compression_chunk_size" violates foreign key constraint "compression_chunk_size_chunk_id_fkey"
          DETAIL: Key (chunk_id)=(5019) is not present in table "chunk".
          pg_restore: error: COPY failed for table "dimension_slice": ERROR: insert or update on table "dimension_slice" violates foreign key constraint "dimension_slice_dimension_id_fkey"
          DETAIL: Key (dimension_id)=(2) is not present in table "dimension".
          pg_restore: error: could not execute query: ERROR: cannot truncate only a hypertable
          HINT: Do not specify the ONLY keyword, or use truncate only on the chunks directly.
          Command was: TRUNCATE TABLE ONLY public.history;

          pg_restore: error: could not execute query: ERROR: current transaction is aborted, commands ignored until end of transaction block
          Command was: COPY public.history (itemid, clock, value, ns) FROM stdin;
          pg_restore: error: could not execute query: ERROR: chunk not found
          DETAIL: schema_name: _timescaledb_internal, table_name: _hyper_1_4700_chunk
          Command was: CREATE INDEX history_clock_idx ON public.history USING btree (clock DESC);


          pg_restore: error: could not execute query: ERROR: operation not supported on hypertables that have compression enabled
          Command was: ALTER TABLE ONLY public.history
          ADD CONSTRAINT history_pkey PRIMARY KEY (itemid, clock, ns);


          pg_restore: error: could not execute query: ERROR: cannot truncate only a hypertable
          HINT: Do not specify the ONLY keyword, or use truncate only on the chunks directly.
          Command was: TRUNCATE TABLE ONLY public.history_uint;

          pg_restore: error: could not execute query: ERROR: current transaction is aborted, commands ignored until end of transaction block
          Command was: COPY public.history_uint (itemid, clock, value, ns) FROM stdin;
          pg_restore: error: could not execute query: ERROR: cannot truncate only a hypertable
          HINT: Do not specify the ONLY keyword, or use truncate only on the chunks directly.
          Command was: TRUNCATE TABLE ONLY public.history_str;



          • dieselfluxcapacitor
            Junior Member
            • Jul 2023
            • 11

            #7
            Are the versions of TimescaleDB and PostgreSQL the same between the old server, the backup, and the new server? That's where I got caught.

            It also looks like you are using the "directory" format with pg_dump / pg_restore. I looked at the man page; this shouldn't make a difference, since it still captures the current schema, but you might try -Fc instead.
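            One quick way to rule out a version mismatch is to print the versions on each host and compare them. A small sketch; the same_version helper is just illustrative, not part of any tool:

            ```shell
            # On each server, these queries print the two versions that must match:
            #   sudo -u postgres psql -Atc "SHOW server_version;"
            #   sudo -u postgres psql -Atc "SELECT extversion FROM pg_extension WHERE extname = 'timescaledb';"

            # Illustrative helper: flag a mismatch between two version strings.
            same_version() {
              if [ "$1" = "$2" ]; then echo "match"; else echo "MISMATCH"; fi
            }

            same_version "2.17.2" "2.17.2"   # -> match
            same_version "2.11.0" "2.17.2"   # -> MISMATCH
            ```
            
            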


            • jjeff123
              Member
              • May 2022
              • 33

              #8
              I ended up realizing that my server really only did one thing, so I just copied the entire postgres data folder to my new box, fixed permissions, and started the services.
              I don't know why, but it completely failed the first time and I had to do it twice. The 2nd time it worked perfectly.
              I just had to make sure the old box and the new one were on the exact same versions of timescale and postgres.
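              For anyone following along, the whole-data-directory approach above can be sketched like this; paths are the Debian/Ubuntu defaults for PostgreSQL 15, and 'newbox' is a placeholder hostname, so adjust to your layout:

              ```shell
              # Stop PostgreSQL on BOTH hosts first so the data directory is consistent.
              sudo systemctl stop postgresql

              # Copy the entire cluster data directory to the new box (run on the old one).
              sudo rsync -a /var/lib/postgresql/15/main/ newbox:/var/lib/postgresql/15/main/

              # On the new box: fix ownership, then start the service.
              sudo chown -R postgres:postgres /var/lib/postgresql/15/main
              sudo systemctl start postgresql
              ```
              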
