zabbix configuration backup (hosts, templates, users...)
  • JanCZ
    Junior Member
    • Jan 2018
    • 22

    #1

    zabbix configuration backup (hosts, templates, users...)

    Hi, I found this script which seems to do exactly what I need: save the configuration of my Zabbix server, without history data for now, just templates, hosts etc.

    I was able to run it, connect to my database, and dump it. The problem is that the resulting backup contains practically nothing and has almost no size. We have around 750 hosts, many custom templates and such, so I would presume the file should be fairly large.

    Not sure what I am doing wrong. Please note that I need to specify the database and such, since there were some problems (the script tries to read this info from zabbix_server.conf).

    Any ideas why it contains no data? I tried to search the web, but could not find anything. Is it possible that I need to stop the Zabbix server first? The config data are static, so I am not worried about the consistency of the DB dump, as I don't need history.

    Code:
    postgres@zabbix:/home/scripts/zabbix-backup-0.9.1$ ./zabbix-dump -t psql -Z -u postgres -d zabbix -p PASSWD -H 127.0.0.1
    Configuration:
     - type:     psql
     - host:     127.0.0.1 (localhost)
     - port:     5432
     - database: zabbix
     - user:     postgres
     - output:   /home/scripts/zabbix-backup-0.9.1
    Fetching list of existing tables...
    Starting table backups...
    Password:
    
    
    For the following large tables only the schema (without data) was stored:
     - --exclude-table-data=acknowledges
     - --exclude-table-data=alerts
     - --exclude-table-data=auditlog
     - --exclude-table-data=auditlog_details
     - --exclude-table-data=event_recovery
     - --exclude-table-data=events
     - --exclude-table-data=event_tag
     - --exclude-table-data=history
     - --exclude-table-data=history_log
     - --exclude-table-data=history_str
     - --exclude-table-data=history_text
     - --exclude-table-data=history_uint
     - --exclude-table-data=problem
     - --exclude-table-data=problem_tag
     - --exclude-table-data=task
     - --exclude-table-data=task_acknowledge
     - --exclude-table-data=task_close_problem
     - --exclude-table-data=task_remote_command
     - --exclude-table-data=task_remote_command_result
     - --exclude-table-data=trends
     - --exclude-table-data=trends_uint
    
    Compressing backup file...
    
    Backup Completed:
    /home/scripts/zabbix-backup-0.9.1/zabbix_cfg_localhost_20190729-1204_db-psql-3.4.7.sql.gz
    rm -f /tmp/tmp.mrkOqzWoTn
    
    ______________________________________________________________________________________________________________________________________
    
    postgres@zabbix:/home/scripts/zabbix-backup-0.9.1$ zcat zabbix_cfg_localhost_20190729-1204_db-psql-3.4.7.sql.gz
    --
    -- PostgreSQL database dump
    --
    
    -- Dumped from database version 10.9 (Debian 10.9-1.pgdg90+1)
    -- Dumped by pg_dump version 11.4 (Debian 11.4-1.pgdg90+1)
    
    SET statement_timeout = 0;
    SET lock_timeout = 0;
    SET idle_in_transaction_session_timeout = 0;
    SET client_encoding = 'UTF8';
    SET standard_conforming_strings = on;
    SELECT pg_catalog.set_config('search_path', '', false);
    SET check_function_bodies = false;
    SET xmloption = content;
    SET client_min_messages = warning;
    SET row_security = off;
    
    --
    -- PostgreSQL database dump complete
    --
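
    As you can see, there is nothing between the header and the footer. One quick sanity check (not part of the script, just a property of plain-format pg_dump output): a dump that contains row data has one COPY block per non-empty table, so counting them shows whether any data made it in:
    Code:
    zcat zabbix_cfg_localhost_20190729-1204_db-psql-3.4.7.sql.gz | grep -c '^COPY '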
    Last edited by JanCZ; 29-07-2019, 12:23.
  • JanCZ
    Junior Member
    • Jan 2018
    • 22

    #2
    Solved:
    For some reason I was fixated on the idea that I had to use the user postgres, not zabbix, but the database user is zabbix, so... I realized my mistake by checking the database itself via pgAdmin. The dump is now 11 GB and counting, a bit larger than I thought.

    Code:
    postgres@zabbix:/home/scripts/zabbix-backup-0.9.1$ ./zabbix-dump -t psql -Z -u zabbix -d zabbix -p PASSWD -H 127.0.0.1
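    The same check works from psql instead of pgAdmin; this is just an illustrative catalog query, not something the script runs:
    Code:
    psql -h 127.0.0.1 -U zabbix -d zabbix -c "SELECT schemaname, tablename, tableowner FROM pg_tables WHERE tablename = 'hosts';"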
    Last edited by JanCZ; 29-07-2019, 14:31.


    • JanCZ
      Junior Member
      • Jan 2018
      • 22

      #3
      Another problem is that I am using TimescaleDB to partition our database, and for some reason the dump started dumping all the data (it reached 84 GB before I stopped it). I am not sure why it does that; the name of each hypertable should prevent it from being dumped, but the chunk tables are not on the exclusion list in the script. Any ideas?
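
      For reference, the chunk tables that blow up the dump can be listed with a standard catalog query (illustrative only, assuming a default TimescaleDB setup):
      Code:
      psql -U zabbix -d zabbix -c "SELECT table_schema, table_name FROM information_schema.tables WHERE table_schema = '_timescaledb_internal' LIMIT 10;"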


      • maxhq
        Junior Member
        • Mar 2015
        • 1

        #4
        Hi JanCZ, I only found your post yesterday.

        I am the author of said script; originally I had only integrated third-party code for PostgreSQL. Meanwhile I have set up such an installation myself, fixed several script issues and added features. In particular, the script no longer silently backs up unknown tables or tables from another database/schema.
        Maybe you can give it another try: https://github.com/maxhq/zabbix-backup/releases

        Cheers, Jens



        • tim.mooney commented
          This seems like a great tool. Thanks for making it available and for continuing to improve it!
      • TiagoTT
        Junior Member
        • Feb 2020
        • 1

        #5
        Hello JanCZ and Jens (maxhq),

        Today I used the newest version of the zabbix-backup script (VERSION=0.9.3).
        Thank you for this tool.

        I am also using PostgreSQL with TimescaleDB and I had the same issue reported by JanCZ: the contents of the history* and trends* tables were not dumped directly, but the TimescaleDB chunk tables (which is where the data is actually stored) were dumped.
        Code:
        ./zabbix-dump -t psql -H database.server.name -P 5432 -Z -x
        I found that the TimescaleDB tables holding internal configuration and table chunks live in the _timescaledb_internal schema, not in the public schema like the other Zabbix tables.
        After I added the schema selection flag (-S public) on the command line, the backup script behaved as expected and produced a small dump containing all Zabbix tables and data, except the contents of the history* and trends* tables.
        Code:
        ./zabbix-dump -t psql -H database.server.name -P 5432 -S public -Z -x
        The help message of the zabbix-dump script mentions:
        Code:
            -S SCHEMA
                Name of database schema (PostgreSQL only).
                Default: public
        This is a bit misleading: when no -S option is given on the command line and no schema is set in the Zabbix server configuration file (or the -Z flag was used, so the configuration file is not read), no schema is actually selected when running pg_dump. All schemas then get dumped, including the _timescaledb_internal schema, which holds the actual data of the history* and trends* tables.
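
        To illustrate (simplified invocations, not the script's exact code): the --exclude-table-data patterns only match the parent table names, so without a schema restriction the chunk tables are dumped in full, while --schema=public skips them entirely:
        Code:
        # Without a schema restriction, pg_dump walks every schema, including
        # _timescaledb_internal; the exclusion patterns below do not match the
        # chunk tables (named like _hyper_1_1_chunk), so their data is dumped:
        pg_dump -h database.server.name -U zabbix -d zabbix \
            --exclude-table-data=history --exclude-table-data=trends > dump.sql

        # Restricting the dump to the public schema leaves the chunks out:
        pg_dump -h database.server.name -U zabbix -d zabbix --schema=public \
            --exclude-table-data=history --exclude-table-data=trends > dump.sql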

        Jens, I don't know what should change, the help message or the code, but I opened a PR with the small code change that makes the default value actually be used:
        It solves the issue of accidentally dumping history* and trends* data, because that data can be indirectly stored in the internal chunk tables of TimescaleDB inside the _timescaledb_internal schema.


        Best regards,
        Tiago
        Last edited by TiagoTT; 13-02-2020, 19:09.
