Hello all,
I have Zabbix server 3.4.6 running on an Ubuntu 14.04 LTS machine, and the time has come for a platform upgrade.
The plan is to deploy an Ubuntu 18.04 LTS machine with Zabbix 5.0 LTS, then migrate the database and configuration files to the new environment.
I've already performed this exact migration twice on internal machines with tiny databases, with very few issues, so I understand the general process and requirements for the task.
However, this time I'm working on a live environment with a sizeable database, and every second of uptime is precious.
Therefore, my question is this: how do I perform the migration with the absolute minimum downtime possible?
Here is my current procedure:
1. Prepare server for Zabbix components:
========================================
apt update
apt -y install mariadb-common mariadb-server mariadb-client powershell traceroute python-pip
systemctl start mariadb
systemctl enable mariadb
mysql_secure_installation
2. Install Zabbix 5.0 LTS:
==========================
wget https://repo.zabbix.com/zabbix/5.0/ubuntu/pool/main/z/zabbix-release/zabbix-release_5.0-1+$(lsb_release -sc)_all.deb
dpkg -i zabbix-release_5.0-1+$(lsb_release -sc)_all.deb
apt update
apt -y install zabbix-server-mysql zabbix-frontend-php zabbix-apache-conf zabbix-agent
3. Create empty Zabbix DB:
==========================
mysql -uroot -p -e "create database zabbix character set utf8 collate utf8_bin;"
mysql -uroot -p -e "grant all privileges on zabbix.* to zabbix@localhost identified by '$pass';"
4. Import migrated components: (these come from the old server)
==============================
cp -rp /opt/zabbix_backup/zabbix /etc/
cp -rp /opt/zabbix_backup/ssl /etc/apache2/
cp -rp /opt/zabbix_backup/sites-available /etc/apache2/
cp -rp /opt/zabbix_backup/conf-available /etc/apache2/
5. Ensure Zabbix Server has correct permissions:
================================================
chown zabbix:zabbix /var/run/zabbix
chown zabbix:zabbix /var/log/zabbix
6. Set up apache:
=================
a2enconf zabbix.conf
a2enmod ssl
a2ensite 000-default.conf
a2ensite default-ssl.conf
apache2ctl configtest
systemctl restart apache2
apache2ctl -S
7. Import database: (the database is also dumped from the old server)
===================
mysql -uroot zabbix -p -e "set global innodb_strict_mode='OFF';"
zcat /opt/zabbix_backup/zabbix_backup.sql.gz | mysql -h localhost -uroot -p zabbix
mysql -uroot zabbix -p -e "set global innodb_strict_mode='ON';"
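(Side note on the import itself: on a dump this size I'd want a progress indicator. Assuming pv is installed, something like this should work in place of the plain zcat pipe:)
# pv shows a progress bar based on the compressed file size
pv /opt/zabbix_backup/zabbix_backup.sql.gz | zcat | mysql -h localhost -uroot -p zabbix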
8. Create swap space:
=====================
fallocate -l 4G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
nano /etc/fstab
swapon --show
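(For reference, rather than editing fstab by hand, the standard swapfile entry can be appended directly; this is the line I'd expect to end up with:)
# make the swapfile persistent across reboots
echo '/swapfile none swap sw 0 0' >> /etc/fstab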
9. Start Zabbix components:
===========================
sudo systemctl restart zabbix-server zabbix-agent
sudo systemctl enable zabbix-server zabbix-agent apache2
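(Starting zabbix-server against the old 3.4 schema kicks off the automatic database upgrade; rather than guessing how far along it is, I'd follow the server log, which should print "completed N% of database upgrade" lines:)
# follow the automatic DB upgrade progress
tail -f /var/log/zabbix/zabbix_server.log | grep -i 'database upgrade'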
10. Configure netplan:
======================
cat /etc/netplan/50-cloud-init.yaml > /etc/netplan/01-netcfg.yaml
nano /etc/netplan/01-netcfg.yaml
netplan generate
netplan try
netplan apply
cat /etc/resolv.conf
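(For completeness, the edited 01-netcfg.yaml ends up looking roughly like the sketch below; the interface name and addresses are placeholders, to be swapped for the old server's details during cutover:)
cat > /etc/netplan/01-netcfg.yaml <<'EOF'
# placeholder interface/addresses - replace with the real ones at cutover
network:
  version: 2
  renderer: networkd
  ethernets:
    ens160:
      addresses: [192.0.2.10/24]
      gateway4: 192.0.2.1
      nameservers:
        addresses: [192.0.2.53]
EOF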
This procedure has worked before, but stopping the Zabbix server, dumping the database, copying it to the new server with scp, restoring it, and waiting for the database upgrade sounds like it could easily take an hour or more.
In addition, there is more work to do on the infrastructure side, such as swapping the internal/public IP addresses, changing the hostname, and editing DNS entries so that I don't have to touch the Zabbix agent configuration on thousands of devices.
One potential solution I've come across would be to dump only the configuration tables from the Zabbix DB, while keeping the old server running.
That way I could fully deploy the new live server without downtime concerns.
However, I would eventually need to import all the history tables into the new installation, which can be very problematic if the schema doesn't match on even a single table.
Is this a viable approach?
I'd be using a modified zabbix-dump script (adapted to 5.0.11) from the maxhq/zabbix-backup repository on GitHub to create the backup without history.
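(If the modified script gives any trouble on 5.0.11, my understanding is that the same config-only dump can be approximated with plain mysqldump: a full dump of everything except the data tables, plus a structure-only dump of the data tables so they still exist after restore. Rough sketch, table list abbreviated:)
# full dump minus the large data tables
DATA_TABLES="history history_log history_str history_text history_uint trends trends_uint events"
IGNORE=""
for t in $DATA_TABLES; do IGNORE="$IGNORE --ignore-table=zabbix.$t"; done
mysqldump --single-transaction -uroot -p $IGNORE zabbix > config_only.sql
# structure only for the data tables, so the restored DB still has them
mysqldump --single-transaction --no-data -uroot -p zabbix $DATA_TABLES >> config_only.sql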
I would then dump only the history tables with the following command:
mysqldump --single-transaction -uroot -p zabbix acknowledges alerts auditlog auditlog_details event_recovery event_suppress event_tag events history history_log history_str history_text history_uint item_rtdata problem problem_tag task task_acknowledge task_check_now task_close_problem task_remote_command task_remote_command_result trends trends_uint > history.sql
Finally, I'd apply the "history.sql" file to the new database, which will insert all the data.
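(To make that final history import tolerant of re-runs and of rows that may already exist on the new side, I'm considering dumping data only with INSERT IGNORE, assuming the tables are already present in the new schema:)
# data only, no CREATE TABLE statements, INSERT IGNORE so duplicates don't abort the import
mysqldump --single-transaction --no-create-info --insert-ignore -uroot -p zabbix \
    history history_log history_str history_text history_uint trends trends_uint > history.sql
# (extend the table list above to match the full list from the earlier command)
mysql -uroot -p zabbix < history.sql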