Backup of Zabbix Database

This topic has been answered.
  • MADxHAWK
    Member
    • Apr 2011
    • 41

    #1

    Backup of Zabbix Database

    Hello everyone.

    We are using Zabbix to monitor our network with over 1000 hosts. The database is quite large now and it is hard to do a complete backup.
    We are now thinking about backing up just the config tables / templates, but not the collected data.

    Can anyone point out the necessary tables to back up for Zabbix 6.x, so that at least the config and templates can be restored in case of a crash?

    All I found was a backup script, but that was for Zabbix 4.x or 5.x.

    Thanks in advance
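
    A minimal sketch of one way to do this, for reference: dump the whole zabbix schema but skip the large time-series tables. The flags below assume the default Zabbix 6.x table names and credentials in ~/.my.cnf; treat it as an illustration, not a verified recipe.

    Code:
    #!/bin/bash
    # config-only dump: keep schema and configuration, skip collected data
    mysqldump --single-transaction \
            --ignore-table=zabbix.history \
            --ignore-table=zabbix.history_log \
            --ignore-table=zabbix.history_str \
            --ignore-table=zabbix.history_text \
            --ignore-table=zabbix.history_uint \
            --ignore-table=zabbix.trends \
            --ignore-table=zabbix.trends_uint \
            zabbix > zabbix_config-$(date +%F).sql

    Note that --ignore-table skips the table definitions too, so on restore the skipped tables would have to be recreated from the stock schema that ships with the Zabbix server packages.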
  • Answer selected by MADxHAWK at 01-09-2022, 09:36 (see post #5 below).

    • derli1988
      Member
      • Apr 2021
      • 60

      #2
      I have this script to back up the entire database and upload the dump to an AWS S3 bucket:

      Code:
      #!/bin/bash
      
      echo "defining variables"
      USER="user" #should be sysops
      PASS="passowrd"
      YEAR=$(date +%Y)
      MONTH=$(date +%m)
      DAY=$(date +%d)
      OLDDATE=$(date +%F --date="1 week ago")
      DIR=/opt/backup/zabbix
      mkdir -p /opt/backup/zabbix
      
      echo "Creating the backup files"
      cd $DIR
      mysqldump --user=$USER --password=$PASS --lock-tables=false zabbix -C -q > $DIR/zabbix_server-$(date +%F).sql
      
      ST=$?
      
      #echo "creating the folder hierarchy"
      #if [ -d "$DIR/$YEAR" ]
      #then
      #        echo "Directory $DIR/$YEAR exists."
      #else
      #        echo "Directory $DIR/$YEAR does not exist, creating it."
      #        mkdir $DIR/$YEAR
      #fi
      #
      #if [ -d "$DIR/$YEAR/$MONTH" ]
      #then
      #        echo "Directory $DIR/$YEAR/$MONTH exists."
      #else
      #        echo "Directory $DIR/$YEAR/$MONTH does not exist, creating it."
      #        mkdir $DIR/$YEAR/$MONTH
      #fi
      #
      #if [ -d "$DIR/$YEAR/$MONTH/$DAY" ]
      #then
      #        echo "Directory $DIR/$YEAR/$MONTH/$DAY exists."
      #else
      #        echo "Directory $DIR/$YEAR/$MONTH/$DAY does not exist, creating it."
      #        mkdir $DIR/$YEAR/$MONTH/$DAY
      #fi
      
      
      # if the database dump worked
      if [ $ST -eq 0 ]
      then
              echo "copying Zabbix binary, doc and conf files"
              cp -p /opt/script/mysqldump.sh $DIR/
              cp -p /etc/zabbix/zabbix_server.conf $DIR/
              cp -p /etc/zabbix/zabbix_agentd.conf $DIR/
              cp -rp /usr/sbin/zabbix_server $DIR/
              cp -rp /usr/share/doc/zabbix-* $DIR/
              cp -p /etc/apache2/conf-enabled/zabbix.conf $DIR/
              cp -rp /usr/share/zabbix/ $DIR/
      
              # Creating the archive
              zip -r -9 zabbix_server-$(date +%F).zip *
      
              # Moving the backup to the S3 bucket
              aws s3 mv zabbix_server-$(date +%F).zip s3://ra-zabbix-backup/Zabbix_Server/
      
              # capture the exit status of the upload (not of zip), so local
              # files are only removed after a successful upload
              ST=$?
      
              # if the upload was successful
              if [ $ST -eq 0 ]
              then
                      echo "deleting sql database files and zabbix conf files"
                      rm -f zabbix_server-$(date +%F).sql zabbix*.conf
                      rm -rf $DIR/
                      # delete the bucket copy from one week ago
                      aws s3 rm s3://ra-zabbix-backup/Zabbix_Server/zabbix_server-$OLDDATE.zip
              fi
      fi
      What I would advise is not only a DB backup: also save the Zabbix config for Apache if it's running Ubuntu (or for PHP if it's CentOS). And don't do a full dump, since the database is huge; just take a backup of these tables (see the sketch after this list):
      • actions
      • alerts
      • config
      • history
      • history_log
      • history_str
      • history_text
      • history_uint
      • host_inventory
      • hosts_groups
      • hosts_templates
      • items
      • media
      • media_type
      • users
      • users_groups
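
      A minimal sketch of what dumping just that list could look like, assuming the same $USER / $PASS variables as the script above (the table list is the one given here, not a verified complete set for Zabbix 6.x):

      Code:
      #!/bin/bash
      USER="user"
      PASS="password"
      # only the listed tables from the zabbix schema are dumped
      TABLES="actions alerts config history history_log history_str history_text \
      history_uint host_inventory hosts_groups hosts_templates items media \
      media_type users users_groups"
      mysqldump --user=$USER --password=$PASS --single-transaction \
              zabbix $TABLES > zabbix_tables-$(date +%F).sql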

      Best of luck


      • MADxHAWK
        Member
        • Apr 2011
        • 41

        #3
        Thanks derli1988, I will check it out.

        At the moment I have a master-slave config of the database and use a script to stop the slave and make a mysqldump. But a dump of the complete database (currently about 70 GB) takes over 10 hours with MySQL 8.x.


        • derli1988
          Member
          • Apr 2021
          • 60

          #4
          Are you using MySQL to replicate the database to a replica/slave server, is that it?

          That's really smart!

          Now, even if the database is big, you can dump the whole DB somewhere else. Just how big is it, and how long does your backup take?


          • MADxHAWK
            Member
            • Apr 2011
            • 41

            #5
            Hello derli1988,

            Yes, I have 2 virtual Linux machines running Ubuntu 20.04 on different ESXi hosts.
            Host 1 is the Zabbix server with the MySQL master.
            Host 2 is the MySQL slave system.
            I use a script that stops the slave, does the dump, and then restarts the slave again. At the moment the database size is around 65 GB and it takes up to 12 hours to make a mysqldump from it, which I don't really understand and am currently investigating.
            Strangely enough, the time the mysqldump takes keeps increasing even though the size of the database only changes slightly. Every day it takes approx. 30 minutes longer to make the mysqldump.

            The old system was a usual desktop PC, also running Ubuntu 20.04 but with MySQL 5.x instead of 8.x. That database is about 250-260 GB in size and a mysqldump only takes about 5 hours.
            I'm still trying to figure out why a mysqldump takes so much longer on a server with way more CPU power, memory and a RAID system than on a usual PC with just a single hard drive.

            Normally I would say the slave system is enough as a backup, but there is a problem: if the master gets corrupted somehow, the slave will be corrupted also. That's why I keep backups of the last 7 days on top.
            Last edited by MADxHAWK; 01-09-2022, 09:36.
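
            A minimal sketch of that stop-dump-restart flow, for reference (assumes it runs on the slave host with credentials in ~/.my.cnf; file names are placeholders):

            Code:
            #!/bin/bash
            # pause replication so the dump sees a frozen, consistent dataset
            mysql -e "STOP SLAVE;"    # MySQL 8.0.22+ prefers "STOP REPLICA;"

            # --single-transaction avoids long locks, --quick streams rows
            mysqldump --single-transaction --quick zabbix > zabbix-$(date +%F).sql

            # resume replication; the slave catches up from the master's binlog
            mysql -e "START SLAVE;"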


            • derli1988
              Member
              • Apr 2021
              • 60

              #6
              You can always try some other switches to optimize the dump process, and pipe through pv to estimate how long it is going to take. Here's an example:

              Code:
              # -C compresses client/server traffic, -q streams rows instead of buffering them;
              # pv -s takes the expected output size so it can show progress (9999M is a rough guess)
              mysqldump --user=$USER --password=$PASS zabbix -C -q | pv -s 9999M > zabbixWithSwitches.sql;
              sleep 30;
              mysqldump --user=$USER --password=$PASS zabbix | pv -s 9999M > zabbixWithoutSwitches.sql;
              sleep 30;
              Also, you might wanna check which table is the biggest and make a separate dump for that.

              Run that query on MySQL

              Code:
              SELECT table_name AS "Table",
                     ROUND(((data_length + index_length) / 1024 / 1024), 2) AS "Size (MB)"
              FROM information_schema.TABLES
              WHERE table_schema = "zabbix"
              ORDER BY (data_length + index_length) DESC;
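
              Once the biggest table is known, a sketch of splitting the dump (assuming history_uint turns out to be the biggest, which is common for Zabbix but not guaranteed):

              Code:
              #!/bin/bash
              # dump the biggest table on its own, so it can run (and be retried) independently
              mysqldump --user=$USER --password=$PASS --single-transaction \
                      zabbix history_uint > zabbix_history_uint-$(date +%F).sql

              # dump everything else, skipping that table
              mysqldump --user=$USER --password=$PASS --single-transaction \
                      --ignore-table=zabbix.history_uint \
                      zabbix > zabbix_rest-$(date +%F).sql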


              • MADxHAWK
                Member
                • Apr 2011
                • 41

                #7
                Thanks for your suggestions derli1988, I will try them. What I forgot to mention is that I partitioned the database to reduce the load of the housekeeper (Zabbix MySQL Database Table Partitioning, techexpert.tips).
                I also thought about migrating from MySQL to MariaDB, since I found quite a lot of complaints about a very slow mysqldump, but no real solution so far.
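
                For readers who haven't seen such a setup: that partitioning boils down to range-partitioning the history tables on their unix-timestamp clock column, roughly like this (illustrative only; the real guides create many partitions plus a rotation procedure):

                Code:
                #!/bin/bash
                # illustrative: monthly range partitions on history.clock
                mysql zabbix -e "
                ALTER TABLE history PARTITION BY RANGE (clock) (
                    PARTITION p2022_09 VALUES LESS THAN (UNIX_TIMESTAMP('2022-10-01 00:00:00')),
                    PARTITION p2022_10 VALUES LESS THAN (UNIX_TIMESTAMP('2022-11-01 00:00:00')),
                    PARTITION p2022_11 VALUES LESS THAN (UNIX_TIMESTAMP('2022-12-01 00:00:00'))
                );"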


                • salad
                  Junior Member
                  • Sep 2022
                  • 2

                  #8
                  Just https://github.com/npotorino/zabbix-backup
