Migrating large old Zabbix deployment to new servers
  • gmmsolutions
    Junior Member
    • Nov 2023
    • 3

    #1

    Migrating large old Zabbix deployment to new servers

    Hello all,

    Looking for some advice here. I have a large-ish Zabbix deployment of a DB server, Processing server, and 3 proxy servers - watching 1000+ hosts at 3000+ values per second. Unfortunately, all of the servers are running on old CentOS 6/7 servers. I would like to get these all moved to new Ubuntu 22.04 servers. The proxy servers should be relatively easy to do, since they run MySQL and a simple DB dump and push should work. The tricky part will be the database server as it currently runs PostgreSQL version 11.8 and TimescaleDB 1.7.0-dev.
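
    For the proxies, something along these lines is probably all it takes (the database name is a placeholder, and it assumes the old and new proxies run the same Zabbix version):

    Code:
    # On the old proxy: dump the proxy database
    mysqldump --single-transaction zabbix_proxy | gzip > zabbix_proxy.sql.gz
    # Copy the file to the new proxy, create the empty DB there if needed, then load it:
    zcat zabbix_proxy.sql.gz | mysql zabbix_proxy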

    So my question is: what would be the most elegant way to get the current CentOS builds moved over to new Ubuntu servers, without having to rebuild the entire stack from scratch?

    Thank you all for any advice and/or info you can provide.
  • tim.mooney
    Senior Member
    • Dec 2012
    • 1427

    #2
    Pick the version of Zabbix you want to run in the new environment, e.g. 6.0.23 LTS.

    Carefully review the Upgrade Notes and the Known Issues for whatever X.Y version (e.g. 6.0) that you're upgrading to. There may be special steps you need to take to ensure the database upgrade works. With Zabbix upgrades, it's all about making sure the database DDL changes work when the new version applies the SQL changes to your old database, to bring it up to the expected schema version.

    Install that environment from scratch on your new hosts, using a newer version of PostgreSQL and TimescaleDB on your new DB server, whatever versions are required by the version of Zabbix you chose.

    Don't worry about all your config and data from the old environment; that will be handled in the following steps, after you've validated that your new install (with no hosts or data) is working.

    Shut down your old Zabbix server so nothing is writing to the database.

    Do an SQL dump/export of the database. My site doesn't use TimescaleDB so I don't know if you need to do anything special there, but that's mainly going to be the history* tables.

    Copy the SQL export from your old environment to the new database server.
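
    Something like this covers those last three steps (service name, file name and host names are just examples):

    Code:
    # On the old Zabbix server (CentOS 7 / systemd; on CentOS 6 it's 'service zabbix-server stop'):
    systemctl stop zabbix-server

    # On the old DB server: dump the whole zabbix database
    # (if TimescaleDB needs special handling, check the Timescale docs first)
    pg_dump -U zabbix zabbix | gzip > /tmp/zabbix_old.sql.gz

    # Copy it to the new DB server:
    scp /tmp/zabbix_old.sql.gz new-db-server:/tmp/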

    Shut down your new zabbix-server processes. They need to be offline while you're importing your old database.

    Delete everything within the 'zabbix' database on your new PostgreSQL install. Don't leave anything behind, make sure the database is just an empty shell.

    Import your old SQL dump/export from your old environment into the empty database. At this point you have a new version of Zabbix (e.g. 6.0.23), but you've just "reset" the database to your old version. That's fine, it's what you want.
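
    Roughly, the shut-down / wipe / import sequence looks like this (example names again; dropping and recreating the database is one way to make sure nothing is left behind):

    Code:
    # On the new Zabbix server:
    systemctl stop zabbix-server

    # On the new DB server: recreate the database, re-enable TimescaleDB if you use it,
    # then load the old dump. (TimescaleDB restores may also need the pre/post-restore
    # steps described in the Timescale docs.)
    sudo -u postgres dropdb zabbix
    sudo -u postgres createdb -O zabbix zabbix
    sudo -u postgres psql -d zabbix -c "CREATE EXTENSION IF NOT EXISTS timescaledb"
    zcat /tmp/zabbix_old.sql.gz | psql -U zabbix -d zabbix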

    Moment of truth: start up the zabbix-server on the new Zabbix server. It should connect to your PostgreSQL database, recognize that it's for an old version, and start applying schema changes to bring it up to date. Assuming you've carefully followed any special steps in the Known Issues and Upgrade Notes documentation, the schema changes should proceed automatically. When it's done, you will have an almost correct database for your version.
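
    i.e. something like this (the log path depends on your packages):

    Code:
    systemctl start zabbix-server
    # Watch the automatic database upgrade progress in the server log:
    tail -f /var/log/zabbix/zabbix_server.log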

    Assuming you're updating from pre-6.0 to something that is 6.0 or later, one of the post-upgrade steps you must perform is to apply primary keys to your history tables. When you fresh-installed Zabbix 6.0 or later in your new environment and set up your database, that database started out with primary keys for all tables. When you blew that away (to make room for your old database), you went back to a database version that does not have primary keys for the history* tables. The automatic database upgrade procedure applied all the other necessary changes to bring your database up to whatever version of Zabbix you're using, but it did not add primary keys. That step can take ages for large installs, so it's a post-upgrade step that must be done manually. Follow the procedure in the docs for adding primary keys and after that eventually finishes, your database will be correct for whatever version of Zabbix you're using.
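
    Conceptually, that manual step boils down to adding these keys. This is only a sketch: the procedure in the official docs (which also covers TimescaleDB) is what you should actually follow, and it will fail if your history tables contain duplicate rows.

    Code:
    for t in history history_uint history_str history_log history_text; do
        psql -U zabbix -d zabbix -c "ALTER TABLE $t ADD PRIMARY KEY (itemid, clock, ns)"
    done
    psql -U zabbix -d zabbix -c "ALTER TABLE trends      ADD PRIMARY KEY (itemid, clock)"
    psql -U zabbix -d zabbix -c "ALTER TABLE trends_uint ADD PRIMARY KEY (itemid, clock)"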

    Because you're doing this in a new environment, you can do "dry runs", to test how the automatic upgrade is going to go. If some part of the automatic database update fails, you can fall back to your old environment, figure out what you need to do to fix the problem, and re-attempt. You just need to completely clean out the database before each attempt to re-import the SQL dump from your old environment.


    • cyber
      Senior Member
      Zabbix Certified Specialist, Zabbix Certified Professional
      • Dec 2006
      • 4806

      #3
      With TimescaleDB I did a data dump to CSV, not a DB dump with all the tables etc. Inserting the data from CSV into the new DB takes care of all those TimescaleDB "partitions" by creating them during the insert. It seemed easier than trying to deal with all the TimescaleDB and PostgreSQL version differences, which can make importing old dumps into new versions impossible.

      When creating the new platform, do not create the DB with the new schema: find the package of your old version and use the schema.sql from there (a small sketch of that follows below).
      Primary keys... this only affects history and trends; the rest of the tables you can import without any issues (IIRC). You can actually apply those primary keys to your new platform with the old DB schema before inserting any data (to avoid doing the work twice by adding primary keys later). The history and trends tables are identical, the new ones just have primary keys and the old ones don't. BUT here's one catch: you need to clean your data of duplicate entries before inserting. If you know a good DBA who can do that before exporting from the old platform, you are in luck.
      You can also export only the last day of data at first, insert it into the new DB, start it all up, and after that deal with exporting and importing the older data, day by day...
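
      The schema step looks roughly like this (paths are just illustrative; use the schema.sql shipped with whatever Zabbix version the old database is on):

      Code:
      # Create the empty 'zabbix' DB with the OLD version's schema
      sudo -u postgres createdb -O zabbix zabbix
      sudo -u postgres psql -d zabbix -c "CREATE EXTENSION IF NOT EXISTS timescaledb"
      psql -U zabbix -d zabbix -f /path/to/old-zabbix-source/database/postgresql/schema.sql
      # Zabbix also ships a timescaledb.sql in the same directory that turns the
      # history/trends tables into hypertables - apply it if the old DB used it.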


      • gmmsolutions
        Junior Member
        • Nov 2023
        • 3

        #4
        Thank you both. I very much appreciate the detailed responses. cyber I'm very interested in your idea of exporting/importing the last day of data from the old database to the new database. I don't really have the space to make a full database export, so I think only a few days would be perfect. Do you have any experience with that, and if so what would be the best way to accomplish that?

        Again, appreciative of any help/advice.


        • cyber
          Senior Member
          Zabbix Certified Specialist, Zabbix Certified Professional
          • Dec 2006
          • 4806

          #5
          Originally posted by gmmsolutions
          Thank you both. I very much appreciate the detailed responses. cyber I'm very interested in your idea of exporting/importing the last day of data from the old database to the new database. I don't really have the space to make a full database export, so I think only a few days would be perfect. Do you have any experience with that, and if so what would be the best way to accomplish that?

          Again, appreciative of any help/advice.
           I copied from one DB directly, piping into the other... you just need to replace "timestamp" and "timestamp_2" with the correct Unix epoch values (00:00:00 to 23:59:59 of the day you need):
           Code:
           psql -c "COPY (SELECT * FROM history WHERE clock > timestamp AND clock < timestamp_2) TO STDOUT DELIMITER ',' CSV" zabbix | psql postgresql://user:password@newPG:5432/zabbix -c "COPY history FROM STDIN CSV;"
           Same for history_uint, trends and trends_uint. For me, history_str and history_text were small enough to copy over in one go...
           I probably even wrote some kind of shell script to create those timestamps and format that command line (something like the sketch below)..
           If you have duplicate rows, it will break there if you add the primary keys before inserting... I did not care too much: I added them to the empty tables, ran the script, waited until it broke, found those duplicates, erased them and tried again.. Some preparation work beforehand might eliminate that; I was too impatient... If you do not add the primary keys before, then you need to do that conversion later, which takes as much time as inserting all of it... basically you will need twice the time.
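
           A minimal sketch of that kind of wrapper script (hosts, credentials and dates are placeholders, and it assumes GNU date for the epoch conversion):

           Code:
           #!/bin/bash
           # Copy one day of history/trends at a time from the old DB to the new one.
           OLDDB="zabbix"                                          # old (local) database
           NEWDB="postgresql://user:password@newPG:5432/zabbix"    # new database

           for day in 2023-11-01 2023-11-02 2023-11-03; do
               start=$(date -d "$day 00:00:00" +%s)    # epoch at start of day
               end=$(date -d "$day 23:59:59" +%s)      # epoch at end of day
               for table in history history_uint trends trends_uint; do
                   echo "Copying $table for $day ..."
                   psql -c "COPY (SELECT * FROM $table WHERE clock >= $start AND clock <= $end) TO STDOUT CSV" "$OLDDB" \
                     | psql "$NEWDB" -c "COPY $table FROM STDIN CSV"
               done
           done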
          Last edited by cyber; 30-11-2023, 11:34.


          • gmmsolutions
            Junior Member
            • Nov 2023
            • 3

            #6
            Thank you. I will work towards doing that and will respond in this thread if I have any issues.


            • Jason
              Senior Member
              • Nov 2007
              • 430

              #7
              Originally posted by cyber
               With TimescaleDB I did a data dump to CSV, not a DB dump with all the tables etc. Inserting the data from CSV into the new DB takes care of all those TimescaleDB "partitions" by creating them during the insert. It seemed easier than trying to deal with all the TimescaleDB and PostgreSQL version differences, which can make importing old dumps into new versions impossible.

               When creating the new platform, do not create the DB with the new schema: find the package of your old version and use the schema.sql from there.
               Primary keys... this only affects history and trends; the rest of the tables you can import without any issues (IIRC). You can actually apply those primary keys to your new platform with the old DB schema before inserting any data (to avoid doing the work twice by adding primary keys later). The history and trends tables are identical, the new ones just have primary keys and the old ones don't. BUT here's one catch: you need to clean your data of duplicate entries before inserting. If you know a good DBA who can do that before exporting from the old platform, you are in luck.
               You can also export only the last day of data at first, insert it into the new DB, start it all up, and after that deal with exporting and importing the older data, day by day...
              I'm curious as to why not do the DB dump and pipe it straight into the new database? I've used that before to move data between servers and was planning to use it again, although this time I have TimescaleDB running. As long as the timescale extension is loaded on the target end, surely it should just create all the tables and chunks straight off?


              • cyber
                Senior Member
                Zabbix Certified Specialist, Zabbix Certified Professional
                • Dec 2006
                • 4806

                #8
                The "DB dump" in my text wasn't meant to be any specific "db_dump" command... I'm pretty sure I just used it to mean a dump of the DB.
                I am no DBA, so half of the time I was struggling to find things that work for me. I'm pretty sure I've never even touched a "db_dump" command. :P Might come in handy next time. :P


                • Jason
                  Senior Member
                  • Nov 2007
                  • 430

                  #9
                  Originally posted by cyber
                  The "DB dump" in my text wasn't meant to be any specific "db_dump" command... I'm pretty sure I just used it to mean a dump of the DB.
                  I am no DBA, so half of the time I was struggling to find things that work for me. I'm pretty sure I've never even touched a "db_dump" command. :P Might come in handy next time. :P
                  On the destination server, this is what I've done before to dump the data straight over. It took about an hour or so to shift a 300GB database.
                  Code:
                  pg_dump -h <source server> -U zabbix zabbix | psql zabbix

                  Planning to use the same thing this time, although as it's a new database server I can test and time it to plan the actual migration.


                  • cyber
                    Senior Member
                    Zabbix Certified Specialist, Zabbix Certified Professional
                    • Dec 2006
                    • 4806

                    #10
                    As I moved from a DB without all the primary keys (4.4) to one with all the primary keys (6.0), and I applied those key patches before inserting, all the inserting broke whenever it found a duplicate entry. And there were plenty... So IIRC, I just split the export up by days and cleaned up the data when needed... I had about 1.2T to move at the time :P Took a whole day to clean up and finish all the import... :P
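
                    One way to sidestep the duplicates entirely is to de-duplicate at export time, e.g. reusing the earlier command with DISTINCT ON (just a sketch, same placeholder timestamps; for trends/trends_uint the key is (itemid, clock)):

                    Code:
                    psql -c "COPY (SELECT DISTINCT ON (itemid, clock, ns) * FROM history WHERE clock > timestamp AND clock < timestamp_2) TO STDOUT CSV" zabbix | psql postgresql://user:password@newPG:5432/zabbix -c "COPY history FROM STDIN CSV"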


                    • Jason
                      Senior Member
                      • Nov 2007
                      • 430

                      #11
                      For reference, my method above works fine. Just make sure that you use a superuser account when connecting to the old database so it can read all the hypertable properties correctly.
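
                      In other words, something along these lines (the role name is just an example; any superuser on the source side will do):

                      Code:
                      pg_dump -h <source server> -U postgres zabbix | psql zabbix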

