Zabbix upgrade from 5.0 to 6.0; big Postgres database issue

  • ValentinZlate
    Junior Member
    • May 2021
    • 2

    #1

    Hi,

    I have a question regarding the upgrade to 6.0. We have a pretty big PostgreSQL database (around 600 GB in size, with an estimated 5.5 billion records).
    We would like to update to Zabbix 6.0 LTS.
    I have tested the upgrade and we can run 6.0 using the 5.0 database structure, but we would really like to follow best practices.

    I have read the upgrade guide here for PostgreSQL:


    My questions are:
    1. Is it really necessary to export the data to a CSV file, load it into a temporary table, and then move it into the newly created table that has the primary keys? Can't we just copy the data directly from the old table to the new table with the PKs already created on it?

    2. We have tried various tests migrating the data between tables, and even after a few days of waiting, none of them completed successfully. I had another idea: remove the duplicate records from the old tables and then create the PKs in place. Would this also work? And if so, would it have any negative consequences?
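    For what it's worth, both ideas can be sketched in plain SQL. This is only a sketch, not the officially supported procedure: it assumes the official history_pk_prepare.sql script has already renamed the original table to history_old and created a new history table with PRIMARY KEY (itemid, clock, ns), and it uses Postgres's ON CONFLICT clause and the physical row id (ctid) to deal with the duplicate rows that the PK would otherwise reject:

    ```sql
    -- Question 1: copy directly from the renamed table into the new
    -- PK table, silently skipping duplicate (itemid, clock, ns) rows.
    INSERT INTO history (itemid, clock, value, ns)
    SELECT itemid, clock, value, ns
    FROM history_old
    ON CONFLICT (itemid, clock, ns) DO NOTHING;

    -- Question 2: alternatively, dedupe the old table in place
    -- (keeps one arbitrary row per key, comparing physical row ids) ...
    DELETE FROM history_old a
    USING history_old b
    WHERE a.ctid < b.ctid
      AND a.itemid = b.itemid
      AND a.clock  = b.clock
      AND a.ns     = b.ns;

    -- ... and then add the primary key to the old table directly.
    ALTER TABLE history_old ADD PRIMARY KEY (itemid, clock, ns);
    ```

    Either way, on a 5.5-billion-row table both the self-join DELETE and the PK build will take a long time; batching the copy by clock range and raising maintenance_work_mem for the index build are worth testing first.
    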

    Thank you

  • SunnyK
    Junior Member
    • Dec 2021
    • 7

    #2
    Hi there, sorry, I'm not able to answer your question, but out of interest, why would you want to migrate your DB as opposed to just creating a new instance? I'm going to assume you are obliged to keep a history of your performance metrics?

    We're upgrading, and as CentOS 7 isn't supported anymore, we have to move to a new OS, so we are blowing everything away and starting afresh, including the database. We've tested exporting data from v5 and importing it into v6, and that works OK, so we can be up and running again fairly quickly.


    • wfeddern
      Junior Member
      • Aug 2022
      • 2

      #3
      I am wondering the same thing as the original poster, doing an upgrade from 4.2 up to 6. I cannot see what is accomplished by writing out the CSV and loading it back into a temp table rather than just copying from the renamed _old tables. To further complicate things, our history tables are partitioned.

      Will likely do some experiments on our test system to see the results of creating the new partitioned tables and then copying data directly from the renamed _old tables.
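      A rough sketch of that experiment, assuming declarative range partitioning on clock and that the original table has been renamed to history_old; the partition name, boundary timestamps, and column types here are illustrative placeholders, not values from any real schema. Note the PK must include the partition key, which (itemid, clock, ns) does:

      ```sql
      -- Sketch: new partitioned history table with the 6.0 primary key.
      CREATE TABLE history (
          itemid bigint           NOT NULL,
          clock  integer          DEFAULT 0 NOT NULL,
          value  double precision DEFAULT 0 NOT NULL,
          ns     integer          DEFAULT 0 NOT NULL,
          PRIMARY KEY (itemid, clock, ns)
      ) PARTITION BY RANGE (clock);

      -- Hypothetical monthly partition (Aug 2022, Unix epoch bounds);
      -- repeat for every range actually present in the old data.
      CREATE TABLE history_2022_08 PARTITION OF history
          FOR VALUES FROM (1659312000) TO (1661990400);

      -- Copy one partition's clock range at a time so each batch stays
      -- bounded; ON CONFLICT skips duplicate rows the PK would reject.
      INSERT INTO history
      SELECT itemid, clock, value, ns
      FROM history_old
      WHERE clock >= 1659312000 AND clock < 1661990400
      ON CONFLICT (itemid, clock, ns) DO NOTHING;
      ```

      Copying range by range also makes it easy to pause and resume the migration, since each completed partition can be checked off independently.
      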

