Zabbix Database - config backup only TABLES

  • simonuk1
    Member
    • Mar 2009
    • 66

    #1

    Zabbix Database - config backup only TABLES

    I am presently looking at the backup script from here:

    Looking at it, it is from a few versions of Zabbix ago. It has a list of tables, and a few of them are marked DATA; this marking tells the script to back up only the schema for those tables and not their data.
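
    For illustration, that DATA split boils down to something like the sketch below (the table lists here are only examples, not the script's actual lists):

        # Sketch of the DATA-table split; table names are illustrative only.
        DB=zabbix
        CONFIG_TABLES="hosts items triggers"       # config tables: schema + data
        DATA_TABLES="history history_uint trends"  # DATA tables: schema only

        mysqldump "$DB" $CONFIG_TABLES > config_tables.sql
        mysqldump --no-data "$DB" $DATA_TABLES > data_schema.sql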

    I have got the table list for version 3.4.6 and there are a lot of new tables.

    Does anyone know if any of the new tables would be considered to be DATA?

    PDF attached.

    Cheers

    Simon
  • kaspars.mednis
    Senior Member
    Zabbix Certified Trainer
    Zabbix Certified Specialist
    Zabbix Certified Professional
    • Oct 2017
    • 349

    #2
    No, none of the new tables in the database schema hold big history DATA,
    so the script is still correct.

    Regards,
    Kaspars


    • louis-m
      Member
      • Nov 2013
      • 70

      #3
      Hi,
      can I just confirm that this script still works? Even if the database has extra tables added, e.g. with newer Zabbix versions, will this script still pick them up, while excluding the data from the large DATA tables flagged in the script?


      • kloczek
        Senior Member
        • Jun 2006
        • 1771

        #4
        There are a few problems with this particular backup script:
        1) it uses mysqldump without locking tables, so there is a fairly high probability that the backup will be inconsistent, because the Zabbix server, proxy, or web frontend will still be making modifications while the backup runs
        2) there are two typical scenarios in which a backup is needed:
        a) to recover the database content after a system fault
        b) to create a database slave instance
        In the second case this backup is useless, because it does not preserve the current binary log position (to store that, binary logs need to be enabled in the MySQL engine settings)
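
        For smaller databases, a minimal sketch that addresses both points could look like this (assuming InnoDB tables and binary logs already enabled; the file name is arbitrary and this is not the original script):

            # --single-transaction: consistent InnoDB snapshot without a global lock
            # --master-data=2: records the binlog position in the dump as a comment
            mysqldump --single-transaction --master-data=2 zabbix > zabbix_consistent.sql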

        Generally speaking, once the database grows beyond a certain size:
        1) making a consistent backup of the master database using mysqldump is not the correct approach, as the database needs to be locked
        2) making the backup as a text-form dump takes a very long time, and restoring from such a dump takes even longer

        To back up a sufficiently big database, a few things need to be prepared first:
        1) the DB backend must consist of two instances: master and slave
        2) in case of a master DB failure, the slave can be promoted to be the new master and nothing needs to be restored from backup. This minimises possible Zabbix service downtime to actual seconds. After promoting the slave to master, the Zabbix server needs to be restarted to resync data from the proxies
        3) after a master fault, a new slave needs to be recreated ASAP
        4) to recreate a slave instance of a sufficiently big database, the master DB storage needs the ability to create filesystem snapshots with the lowest possible snapshot overhead
        5) the master needs to keep binary logs for a period no shorter than the backup cycle (which means a binary log size factor has to be added to the equation giving the size of the storage used by the DB backend); a minimal config sketch follows below
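
        For point 5, the relevant MySQL server settings might look roughly like this (a my.cnf fragment with placeholder values, not a recommendation):

            [mysqld]
            server-id        = 1           # must be unique per instance; the slave gets its own
            log-bin          = mysql-bin   # enable binary logging (required for replication)
            expire_logs_days = 14          # keep binlogs for at least one full backup cycle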

        Possible solutions, per OS:
        1) Linux -> at least btrfs or zfs (using LVM is possible, but it consumes far more storage to preserve the extents modified since the snapshot, and the snapshot IO performance penalty is very big)
        2) FreeBSD -> zfs
        3) Solaris -> zfs

        For backups of the biggest databases, zfs is practically the only solution (as the storage used by the DB backend).
        Another possibility is to use snapshots provided at the storage layer, for example by EMC storage. In that case even ext4 could be used (in the case of Linux).

        If at least the above conditions are fulfilled (there are a few other minor details which need to be added to this picture), all that needs to be done to make the backup is:
        1) flush all unwritten data (which is still only in memory)
        2) lock the database for write operations
        3) flush all unwritten data again (this second flush takes far less time than the first one)
        4) store the current binary log position in a text file
        5) create the snapshot (usually this takes less than a few seconds)
        6) unlock the database for write operations
        7) generate a stream of data out of the snapshot, either to preserve it as the backup or to restore the whole volume onto a slave database on another system
        8) because the snapshotted volume contains the text file with the current binary log position, after copying the snapshot stream to another system the content of that file can be used in the "CHANGE MASTER" query (which makes the whole backup self-contained and usable to recreate a slave)
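
        A minimal shell sketch of those steps, assuming the MySQL datadir lives on a zfs dataset named tank/mysql (the dataset, host, and file names are placeholders, not from any real script):

            # steps 1-6 run in one client session so the global read lock is held throughout
            {
              echo 'FLUSH TABLES;'                 # step 1: first flush of unwritten data
              echo 'FLUSH TABLES WITH READ LOCK;'  # steps 2-3: block writes, flush again
              # step 4: record the binlog position inside the dataset being snapshotted
              echo "system mysql -u root -e 'SHOW MASTER STATUS\G' > /var/lib/mysql/binlog-pos.txt"
              # step 5: take the snapshot while the lock is still held
              echo 'system zfs snapshot tank/mysql@backup'
              echo 'UNLOCK TABLES;'                # step 6: writes may resume
            } | mysql -u root --named-commands
            # step 7: turn the snapshot into a stream (a backup file, or a new slave's volume)
            zfs send tank/mysql@backup | ssh backuphost 'zfs receive tank/mysql-backup'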

        All these details say one thing: beyond a certain database size, using any "general backup/restore script" makes little sense, as some operations need to be done:
        1) on more than one system
        2) with typical root permissions in the case of btrfs, or, on zfs, with the snapshot create and send/receive operations additionally delegated at the zfs layer
        3) AWS RDS is usable only up to the scale at which making a backup has no IO impact on the master (zfs send/receive has a far lower impact than the AWS backup procedure)

        These are not all the details, but this is more or less the kind of scaffold on which people with decent DB and storage skills would be able to hang all the details not fully listed here.
        The subject is complicated enough that one could probably make quite a juicy full-hour presentation out of it.
        Last edited by kloczek; 10-03-2018, 13:10.
        http://uk.linkedin.com/pub/tomasz-k%...zko/6/940/430/
        https://kloczek.wordpress.com/
        zapish - Zabbix API SHell binding https://github.com/kloczek/zapish
        My zabbix templates https://github.com/kloczek/zabbix-templates


        • louis-m
          Member
          • Nov 2013
          • 70

          #5
          Wow. Thank you for the detailed explanation and for taking the time to respond.
          With all the above in mind, what would you generally recommend for doing the backup?
          In our case it is acceptable to actually stop Zabbix (every now and again) to take a backup, e.g. on a version change or after lots of config changes.
          So we could stop it, back it up via mysqldump to an NFS share, then restart, and use that backup should the worst happen.
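
          Sketched out, that cold-backup approach might look something like this (the service name and paths are assumptions about our setup):

              # cold backup: nothing writes to the DB while the dump runs
              systemctl stop zabbix-server
              mysqldump zabbix > /mnt/nfs/zabbix-$(date +%F).sql
              systemctl start zabbix-server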


          • kloczek
            Senior Member
            • Jun 2006
            • 1771

            #6
            The exact techniques/approach depends on a few factors:
            1) your and/or your colleagues' skills and knowledge
            2) available resources (hardware, human resources, time and money)
            3) the flow of Zabbix data measured in NVPS, and the predicted increase of this flow over, let's say, the next year or so.

            The whole set of possibilities forms a continuum/full spectrum purely in terms of complexity. There are a few fixed points in this whole construction, but even those can be moved around depending on other things.
            There is no single hammer which can be used everywhere; or rather, using such a hammer would be pointless purely in terms of cost (it would be too expensive). Not everyone needs to use zfs, because sometimes, for example, block storage with snapshotting abilities is already available, and then the exact architecture at the OS layer can be simpler.
            http://uk.linkedin.com/pub/tomasz-k%...zko/6/940/430/
            https://kloczek.wordpress.com/
            zapish - Zabbix API SHell binding https://github.com/kloczek/zapish
            My zabbix templates https://github.com/kloczek/zabbix-templates

