Backing up Zabbix MySQL DB

  • johanpre44
    Member
    • Apr 2006
    • 40

    #1

    Backing up Zabbix MySQL DB

    I came across an open-source, community-supported tool for backing up the Zabbix database called mysql-zrm (ZRM stands for Zmanda Recovery Manager). The project has a guide for configuring and scheduling a MySQL DB backup (see http://www.zmanda.com/quick-mysql-backup.html), and it takes only 15 minutes to work through the document.

    If you are struggling with your backups, I would suggest using this tool; it certainly made my life much easier.

    I'm now making daily config backups (all tables except history and trends) and a weekly complete backup. The daily backups (about 14 MB of configuration) are taken with MySQL running and take about 5 minutes, and the backup file is compressed. I use logical backups, which means you can restore them much like when you first create the Zabbix DB (the catch is that you need to restore/import into an empty/new DB so that you don't run into problems). I have not yet tested the restore, but I'm sure it will work. I will also post the weekly backup results with a complete restore.

    My complete Zabbix DB is about 2.3 GB, and I've been monitoring (25 hosts and a total of 1190 items) since the beginning of October 2006. I only store history for most of my items for 7 days, and trends for 365 days. The DB does not grow much at the moment.
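A config-only dump like the one described above can be sketched with mysqldump's --ignore-table option. This is a sketch, not the poster's actual setup: the history/trends table names and the output path are assumptions to check against your own Zabbix schema.

```shell
#!/bin/sh
# Sketch: build a mysqldump command that skips the large history/trends
# tables, yielding a small daily "configuration-only" backup.
# Table names and the output path below are assumptions.
DB=zabbix
OUT="/var/backups/${DB}-config-$(date +%Y%m%d).sql.gz"

DUMP_CMD="mysqldump --opt $DB"
for t in history history_uint history_str trends trends_uint; do
  DUMP_CMD="$DUMP_CMD --ignore-table=$DB.$t"
done

# Printed rather than executed, so the sketch has no side effects:
echo "$DUMP_CMD | gzip > $OUT"
```

Running the printed command while MySQL is up matches the "backup with MySQL running" workflow described in the post.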
  • johanpre44
    Member
    • Apr 2006
    • 40

    #2
    Restore Works

    I've just tested a restore with mysql-zrm and it works brilliantly. To make the restore as easy as possible, I suggest you also keep backups of your /etc/zabbix/* and /etc/mysql-zrm/*.

    With mysql-zrm you do not even have to create a Zabbix DB; just run the restore!

    Once all these files had been restored on a server with a minimal install of Fedora Core 5 or 6, with Zabbix installed from the Extras repo, the restore took me about 45 minutes to an hour (my DB data was about 750 MB with all the history). This was done on a virtual machine (VMware Server free edition on a Dell 2950; the VM had only 512 MB RAM).

    • James Wells
      Senior Member
      • Jun 2005
      • 664

      #3
      Greetings,

      Another option is AutoMysqlBackup, a simple shell script you can find via Google. We currently use it to back up 4 of our Zabbix DBs on a daily (incremental), weekly (differential), monthly (full) rotation. Setting up the initial backups, following the directions, took about 5 minutes.

      It labels the backups as <DB Name>-<YYYYMMDD>-<Rotation>.sql, then gzips them. The 4 DBs we currently back up each collect about 2 GB per day; the daily backups take about 5 minutes each, the weekly backups about 10 minutes, and the monthly backups about 3 minutes.

      Unfortunately, we can't use this method on our main production servers: the smallest of them is generating around 4 GB per day right now, whereas our biggest is generating about 21 GB per day.
      Unofficial Zabbix Developer

      • phork
        Member
        • Nov 2006
        • 38

        #4
        "I have not yet tested the restore, but I'm sure it will work."

        Famous last words

        • painintheass
          Junior Member
          • Sep 2007
          • 8

          #5
          You ASS WIPE

          Originally posted by johanpre44
          I've just tested a restore with mysql-zrm and it works brilliantly. To make the restore as easy as possible I suggest that you also keep backups of your /etc/zabbix/* and /etc/mysql-zrm/*

          With mysql-zrm you do not even have to create a zabbix db, just run the restore!

          Once all these files had been restored on a server with a minimal install of Fedora Core 5 or 6, with Zabbix installed from the Extras repo, the restore took me about 45 minutes to an hour (my DB data was about 750 MB with all the history). This was done on a virtual machine (VMware Server free edition on a Dell 2950; the VM had only 512 MB RAM).
          Running Zmanda / a Zmanda application in an environment where Bacula is used????? You should be shot! Let me give you a quick lesson in Zabbix and Bacula. Seeing as you have Bacula, and you are already backing up Squid, SARG and DansGuardian on the Zabbix/firewall box, you can just add a ClientRunBeforeJob to the Bacula Job specified for your Zabbix/proxy/firewall server. And because I know you know jack-shit about Bacula, since you do nothing at work except surf the Net the whole day, I'll make it simple for you and give you the details.

          Under the Job section of the bacula-dir.conf file, add the following under the firewall's Job, between JobDefs = and WriteBootStrap =: ClientRunBeforeJob = "/usr/bin/mysqldump --opt zabbix > </path/to/backupdir/backup-file-name>.sql" (zabbix is the DB name, in case you don't know). Bacula will automatically create a new backup every night, and it will be written to disk/tape along with the rest of your daily backups. Please remember to also add </path/to/backupdir/backup-file-name>.sql to the firewall's FileSet.

          No mess, no fuss, and it is automated.

          • Alexei
            Founder, CEO
            Zabbix Certified Trainer
            Zabbix Certified Specialist
            Zabbix Certified Professional
            • Sep 2004
            • 5654

            #6
            Please, be more polite.
            Alexei Vladishev
            Creator of Zabbix, Product manager
            New York | Tokyo | Riga
            My Twitter

            • JancovdM
              Junior Member
              • Nov 2007
              • 5

              #7
              Backing up Zabbix DB using Bacula

              If you use Bacula on your site, you can use it to back up the Zabbix DB. The details are as follows:

              Presuming you have already set up a Bacula FD on the Zabbix machine, the entries in bacula-dir.conf will look something like this:

              Under the Job resource:

              Job {
                Name = zabbix
                JobDefs = zabbix
                ClientRunBeforeJob = "/usr/bin/mysqldump --opt zabbix > /path/to/backupdir/zabbix.sql"
                WriteBootStrap = "/path/to/var/bacula/zabbix.bsr"
              }

              ClientRunBeforeJob will execute the command on the Zabbix box before the backup starts. Now you have to make sure that you specify the backup path in the Zabbix FileSet, like so:

              FileSet {
                Name = Zabbix
                Include {
                  Options {
                    IgnoreCase = yes
                    # other options as described in the Bacula manual
                  }
                  File = /path/to/backupdir/zabbix.sql
                  File = /etc/zabbix   # and any other files you want kept
                }
              }

              Bacula will create the SQL dump file at the specified time and then back it up to either disk or tape. As the previous poster said: no mess, no fuss, and you don't have to worry about it, because Bacula will notify you of the status of both the command and the backup.

              • Niels
                Senior Member
                • May 2007
                • 239

                #8
                Do these methods take proper care of the integrity of the database? A simple mysqldump might result in a corrupt backup if it's done while Zabbix is running. Does Bacula lock the database, and how does that work?

                What's the officially recommended method from the Zabbix devs?

                • eli.stair
                  Junior Member
                  • May 2006
                  • 20

                  #9
                  single-transaction non-locking dump

                  In lieu of any specific backup software, I use this generic method for backing up my database contents, and cycle through timestamped versions of it in case of recovery. Pipe through bzip2 to cut down on file size. I perform this on the replicated slave through which I grant general read-only access to Zabbix info, thus avoiding any impact on the running collecting DB server, which is /heavily/ loaded already.

                  'mysqldump --single-transaction --quick [--all-databases]' will dump a coherent snapshot of the database(s) without locking any tables. Used with --all-databases, this also includes your user/permission info from the 'mysql' database, making it trivial to recover a full snapshot in the event a server rebuild is necessary, if all your DB server does is act as a repository for Zabbix.
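Put together with the timestamping and bzip2 compression described above, the invocation might look like this sketch (the output directory is an assumption, not taken from the post):

```shell
#!/bin/sh
# Sketch of the non-locking dump described above.
# --single-transaction: consistent InnoDB snapshot without table locks
# --quick: stream rows instead of buffering whole tables in memory
# The output directory is an assumption.
STAMP=$(date +%Y%m%d-%H%M)
OUT="/var/backups/mysql/all-databases-$STAMP.sql.bz2"
DUMP_CMD="mysqldump --single-transaction --quick --all-databases"

# Printed rather than executed, so the sketch has no side effects:
echo "$DUMP_CMD | bzip2 > $OUT"
```

Running the printed pipeline on the replica, as the poster does, keeps the load off the collecting server.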

                  /eli

                  • globifrosch
                    Member
                    • Sep 2005
                    • 74

                    #10
                    Originally posted by eli.stair

                    'mysqldump --single-transaction --quick [--all-databases]' will dump a coherent snapshot of the database(s), without locking any tables.

                    /eli
                    Note that --single-transaction only works with the InnoDB engine, not with MyISAM. For MyISAM, try "mysqlhotcopy"; it's faster than mysqldump.
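For the MyISAM case, a mysqlhotcopy run is a one-liner. This is a sketch, not from the post: the destination directory is an assumption, and note that mysqlhotcopy copies the raw table files, so it has to run on the database host itself.

```shell
#!/bin/sh
# Sketch: mysqlhotcopy locks the MyISAM tables and copies their files.
# It must run on the DB host; the destination directory is an assumption.
CMD="mysqlhotcopy zabbix /var/backups/mysql"

# Printed rather than executed, so the sketch has no side effects:
echo "$CMD"
```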

                    • Niels
                      Senior Member
                      • May 2007
                      • 239

                      #11
                      Originally posted by eli.stair
                      In lieu of any specific backup software, I use this generic method for backing up my database contents, and cycle through timestamped versions of it in case of recovery. Pipe through bzip2 to cut down on file size. I perform this on the replicated slave through which I grant general read-only access to Zabbix info, thus avoiding any impact on the running collecting DB server, which is /heavily/ loaded already.
                      Thanks, this sounds good. Have you successfully done a restore? Any tips on setting up such a replication?

                      • JancovdM
                        Junior Member
                        • Nov 2007
                        • 5

                        #12
                        MySQL Backup using Bacula

                        Bacula is intelligent enough to take care of possible SQL dump file corruption, but if you are paranoid, you can always replace the command shown in the earlier post with a script that stops the Zabbix server, creates the SQL dump file, and starts the Zabbix server again. This same procedure is used to back up the Bacula Catalog, so I have faith that it'll work for Zabbix as well; I have tested it a couple of times with numerous Zabbix installations and it all worked, including the Master-Child Node setup.

                        Script 1:
                        (this will be entered in the ClientRunBeforeJob)
                        Script Name ---> zabbix_backup.sh

                        #!/bin/bash
                        # Stop the Zabbix processes
                        killall zabbix_server
                        killall zabbix_agentd   # or zabbix_agent, if that is what you run

                        # Create the SQL dump file
                        /path/to/mysqldump --opt zabbix > /path/to/backupdir/zabbix_backup.sql


                        Script 2:
                        (This will be entered as a ClientRunAfterJob)
                        Script name --> start_zabbix.sh

                        #!/bin/bash
                        # Start the Zabbix processes
                        /usr/local/sbin/zabbix_server
                        /usr/local/sbin/zabbix_agentd   # or zabbix_agent, if that is what you run


                        The Bacula conf:

                        Job {
                          Name = Zabbix
                          JobDefs = Zabbix
                          ClientRunBeforeJob = "/path/to/zabbix_backup.sh"
                          ClientRunAfterJob = "/path/to/start_zabbix.sh"
                          WriteBootStrap = "/path/to/zabbix.bsr"
                        }

                        Please remember that these scripts are located on the Zabbix box and not on the Bacula box.

                        I know these scripts are quite simple in comparison to others, but they get the job done. Please note that I'm not a bash expert, and I'm too lazy to write an enormous script just to do a couple of simple things.
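One hedged addition to the scripts above: if mysqldump dies halfway, Bacula would still back up a truncated .sql file. A small check like this sketch (the "-- Dump completed" trailer is mysqldump's default final line; the path is an assumption) could be appended to zabbix_backup.sh so the job fails instead:

```shell
#!/bin/bash
# Sketch: verify that a dump finished before letting the Bacula job run.
# mysqldump ends a successful dump with a "-- Dump completed" comment,
# so its absence suggests a truncated or failed dump.
dump_ok() {
  tail -n 1 "$1" | grep -q "Dump completed"
}

# In zabbix_backup.sh, after the mysqldump line, one could add:
#   dump_ok /path/to/backupdir/zabbix_backup.sql || exit 1
# A non-zero exit from a ClientRunBeforeJob makes Bacula cancel the job,
# so a bad dump stops the backup rather than silently replacing a good one.
```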

                        • tchjts1
                          Senior Member
                          • May 2008
                          • 1605

                          #13
                          Originally posted by James Wells
                          Greetings,

                          Another option is AutoMysqlBackup, a simple shell script you can find via Google. We currently use it to back up 4 of our Zabbix DBs on a daily (incremental), weekly (differential), monthly (full) rotation. Setting up the initial backups, following the directions, took about 5 minutes.

                          It labels the backups as <DB Name>-<YYYYMMDD>-<Rotation>.sql, then gzips them. The 4 DBs we currently back up each collect about 2 GB per day; the daily backups take about 5 minutes each, the weekly backups about 10 minutes, and the monthly backups about 3 minutes.

                          Unfortunately, we can't use this method on our main production servers: the smallest of them is generating around 4 GB per day right now, whereas our biggest is generating about 21 GB per day.
                          I just installed this and so far it is working quite nicely on our test machine.

                          • kikiche
                            Junior Member
                            • Mar 2008
                            • 8

                            #14
                            Exactly what I was looking for

                            The automysqlbackup.sh.2.5 script is really simple to use and very easy to activate.
                            Exactly what I was looking for to back up my Zabbix data.

                            nice work

                            • welkin
                              Senior Member
                              • Mar 2007
                              • 132

                              #15
                              I have the problem that we have 3 nodes which are all under heavy load. Doing nightly dumps is not possible for me (a dump would probably take more than 10 hours), and dumping just the config is possible, but I don't want to lose all my history. I like the idea of MySQL replication, but as far as I know it is only possible with one master having multiple slaves, not with one slave (the backup server) having multiple masters. Any ideas for working around this problem?

                              thanks in advance

                              welkin
