3 Database upgrade to primary keys

Overview

This section provides instructions for manually upgrading the tables in existing installations to primary keys.

Upgrading to primary keys optimizes how data is indexed and accessed, which can speed up queries and save space. It also improves data management and synchronization in clustered setups, helping with scaling and ensuring that the system stays reliable even if some servers fail.

Since Zabbix 6.0, primary keys are used for all tables in new installations.

There is no automatic database upgrade to primary keys; however, existing installations can be upgraded manually after upgrading Zabbix server to 6.0 or newer.

Instructions are available for: MySQL/MariaDB, PostgreSQL, PostgreSQL + TimescaleDB, Oracle.

The instructions provided on this page are designed for advanced users and may need to be adjusted for your specific configuration. Upgrading to primary keys can be time-consuming and resource-intensive. Make sure that there is enough free disk space; depending on the database size and the amount of stored data, the process may require up to 2.5 times the space currently used by the history tables.
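For a rough estimate before starting, the space currently used by the history tables and the available disk space can be checked, for example, as follows (a sketch; the database name zabbix and the paths are assumptions that may differ in your setup):

{.bash}
       # Free space on the partition holding the database data directory (example path)
       df -h /var/lib/mysql

       # Current size of the history tables, MySQL/MariaDB
       mysql -uzabbix -p zabbix -e "SELECT table_name, ROUND((data_length+index_length)/1024/1024) AS size_mb FROM information_schema.tables WHERE table_schema='zabbix' AND table_name LIKE 'history%';"

       # Current size of the history tables, PostgreSQL
       sudo -u zabbix psql zabbix -c "SELECT relname, pg_size_pretty(pg_total_relation_size(relid)) FROM pg_stat_user_tables WHERE relname LIKE 'history%';"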

Important notes

To perform the database upgrade:

  1. Stop Zabbix server.

It is strongly recommended to stop Zabbix server for the time of the upgrade. However, if absolutely necessary, the upgrade can be performed while the server is running (only for MySQL, MariaDB, and PostgreSQL without TimescaleDB).

  2. Back up the database (see the backup sketch after this list).
  3. Run the upgrade scripts for the corresponding database.
  4. Start Zabbix server.
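For example, a full backup can be taken with the standard dump tools before running any upgrade scripts (a sketch; the credentials, database name, and the /backup output path are assumptions):

{.bash}
       # MySQL/MariaDB: consistent dump without locking InnoDB tables
       mysqldump -uzabbix -p --single-transaction zabbix > /backup/zabbix_before_pk.sql

       # PostgreSQL
       sudo -u zabbix pg_dump zabbix > /backup/zabbix_before_pk.sql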

Only the server database has to be upgraded. Proxies will not benefit from this upgrade.

If the database uses partitioned tables, contact the database administrator or Zabbix support team for help.

The CSV files can be removed after a successful upgrade to primary keys.

Optionally, the Zabbix frontend can be switched to maintenance mode.

MySQL

Export and import must be performed in a tmux/screen session to ensure that the session isn't dropped.
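For example, a named tmux session can be used so that the export/import survives a dropped SSH connection (the session name is arbitrary):

{.bash}
       tmux new -s zbx_pk_upgrade        # start a named session and run the upgrade inside it
       # if the connection drops, log in again and re-attach:
       tmux attach -t zbx_pk_upgrade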

See also: Important notes

MySQL 8.0+ with mysqlsh

This method can be used while Zabbix server is running, but it is recommended to stop the server for the time of the upgrade. MySQL Shell (mysqlsh) must be installed and able to connect to the database.

  • Log in to the MySQL console as root (recommended) or as any user with FILE privileges.

  • Start MySQL with the local_infile variable enabled.

  • Rename old tables and create new tables by running history_pk_prepare.sql.

mysql -uzabbix -p<password> zabbix < /usr/share/zabbix-sql-scripts/mysql/history_pk_prepare.sql
  • Export and import data.

Connect via mysqlsh. If using a socket connection, specifying the path might be required.

sudo mysqlsh -uroot -S /run/mysqld/mysqld.sock --no-password -Dzabbix

Run (CSVPATH can be changed as needed):

CSVPATH="/var/lib/mysql-files";
       
       util.exportTable("history_old", CSVPATH + "/history.csv", { dialect: "csv" });
       util.importTable(CSVPATH + "/history.csv", {"dialect": "csv", "table": "history" });
       
       util.exportTable("history_uint_old", CSVPATH + "/history_uint.csv", { dialect: "csv" });
       util.importTable(CSVPATH + "/history_uint.csv", {"dialect": "csv", "table": "history_uint" });
       
       util.exportTable("history_str_old", CSVPATH + "/history_str.csv", { dialect: "csv" });
       util.importTable(CSVPATH + "/history_str.csv", {"dialect": "csv", "table": "history_str" });
       
       util.exportTable("history_log_old", CSVPATH + "/history_log.csv", { dialect: "csv" });
       util.importTable(CSVPATH + "/history_log.csv", {"dialect": "csv", "table": "history_log" });
       
       util.exportTable("history_text_old", CSVPATH + "/history_text.csv", { dialect: "csv" });
       util.importTable(CSVPATH + "/history_text.csv", {"dialect": "csv", "table": "history_text" });
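Before moving on, it is worth comparing row counts between the old and the new tables as a sanity check; a sketch (the new tables may contain slightly fewer rows if duplicate (itemid,clock,ns) combinations were present in the old data):

{.bash}
       mysql -uzabbix -p zabbix -e "SELECT (SELECT COUNT(*) FROM history_old) AS history_old, (SELECT COUNT(*) FROM history) AS history;"
       # repeat for history_uint, history_str, history_log and history_text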

MariaDB/MySQL 8.0+ without mysqlsh

This upgrade method takes more time and should be used only if an upgrade with mysqlsh is not possible.

Table upgrade
  • Log in to the MySQL console as root (recommended) or as any user with FILE privileges.

  • Start MySQL with the local_infile variable enabled (see the sketch after this list).

  • Rename old tables and create new tables by executing history_pk_prepare.sql:

mysql -uzabbix -p<password> zabbix < /usr/share/zabbix-sql-scripts/mysql/history_pk_prepare.sql
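A minimal sketch of enabling local_infile, as mentioned in the list above; the configuration file path and section shown in the comments are examples and differ between distributions:

{.bash}
       # enable at runtime (reset on server restart)
       mysql -uroot -p -e "SET GLOBAL local_infile = 1;"

       # or persist it in the server configuration and restart MySQL/MariaDB:
       #   [mysqld]
       #   local_infile = 1
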
Migration with stopped server

max_execution_time must be disabled before migrating data to avoid timeout during migration.

{.sql}
       SET @@max_execution_time=0;

       INSERT IGNORE INTO history SELECT * FROM history_old;
       INSERT IGNORE INTO history_uint SELECT * FROM history_uint_old;
       INSERT IGNORE INTO history_str SELECT * FROM history_str_old;
       INSERT IGNORE INTO history_log SELECT * FROM history_log_old;
       INSERT IGNORE INTO history_text SELECT * FROM history_text_old;
       

Follow post-migration instructions to drop the old tables.

Migration with running server

Check for which paths import/export is enabled:

{.sql}
       mysql> SELECT @@secure_file_priv;
       +-----------------------+
       | @@secure_file_priv    |
       +-----------------------+
       | /var/lib/mysql-files/ |
       +-----------------------+
       

If the secure_file_priv value is a path to a directory, export/import will be performed for files in that directory. In this case, edit the file paths in the queries accordingly, or set secure_file_priv to an empty string for the duration of the upgrade.

If secure_file_priv value is empty, export/import can be performed from any location.

If secure_file_priv value is NULL, set it to the path that contains exported table data ('/var/lib/mysql-files/' in the example above).

For more information, see MySQL documentation.
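If secure_file_priv needs to be changed, note that it is read-only at runtime and has to be set in the server configuration followed by a restart; a sketch (the configuration file path and service name are examples):

{.bash}
       # /etc/my.cnf (example path); secure_file_priv cannot be changed at runtime
       #   [mysqld]
       #   secure_file_priv = "/var/lib/mysql-files/"

       sudo systemctl restart mysqld     # the service may be named mysql or mariadb on your distribution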

max_execution_time must be disabled before exporting data to avoid timeout during export.

{.sql}
       SET @@max_execution_time=0;

       SELECT * INTO OUTFILE '/var/lib/mysql-files/history.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_old;
       LOAD DATA INFILE '/var/lib/mysql-files/history.csv' IGNORE INTO TABLE history FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';

       SELECT * INTO OUTFILE '/var/lib/mysql-files/history_uint.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_uint_old;
       LOAD DATA INFILE '/var/lib/mysql-files/history_uint.csv' IGNORE INTO TABLE history_uint FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';

       SELECT * INTO OUTFILE '/var/lib/mysql-files/history_str.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_str_old;
       LOAD DATA INFILE '/var/lib/mysql-files/history_str.csv' IGNORE INTO TABLE history_str FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';

       SELECT * INTO OUTFILE '/var/lib/mysql-files/history_log.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_log_old;
       LOAD DATA INFILE '/var/lib/mysql-files/history_log.csv' IGNORE INTO TABLE history_log FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';

       SELECT * INTO OUTFILE '/var/lib/mysql-files/history_text.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_text_old;
       LOAD DATA INFILE '/var/lib/mysql-files/history_text.csv' IGNORE INTO TABLE history_text FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';

Follow the post-migration instructions to drop the old tables.

PostgreSQL

Export and import must be performed in a tmux/screen session to ensure that the session isn't dropped. For installations with TimescaleDB, skip this section and proceed to PostgreSQL + TimescaleDB.

See also: Important notes

Table upgrade

  • Rename tables using history_pk_prepare.sql:
sudo -u zabbix psql zabbix < /usr/share/zabbix-sql-scripts/postgresql/history_pk_prepare.sql

Migration with stopped server

  • Insert the data from the old tables into the new tables, ignoring duplicates:
INSERT INTO history SELECT * FROM history_old ON CONFLICT (itemid,clock,ns) DO NOTHING;
       
       INSERT INTO history_uint SELECT * FROM history_uint_old ON CONFLICT (itemid,clock,ns) DO NOTHING;
       
       INSERT INTO history_str SELECT * FROM history_str_old ON CONFLICT (itemid,clock,ns) DO NOTHING;
       
       INSERT INTO history_log SELECT * FROM history_log_old ON CONFLICT (itemid,clock,ns) DO NOTHING;
       
       INSERT INTO history_text SELECT * FROM history_text_old ON CONFLICT (itemid,clock,ns) DO NOTHING;

See tips for improving INSERT performance: PostgreSQL: Bulk Loading Huge Amounts of Data, Checkpoint Distance and Amount of WAL.
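For example, one common bulk-load optimization is to disable synchronous commit for the session performing the inserts (a sketch; this only affects the session it is set in and trades a small crash-safety window for speed):

{.bash}
       # both statements are sent in one psql call so the setting applies to the INSERT
       sudo -u zabbix psql zabbix -c "SET synchronous_commit = off; INSERT INTO history SELECT * FROM history_old ON CONFLICT (itemid,clock,ns) DO NOTHING;"
       # repeat the same pattern for history_uint, history_str, history_log and history_text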

Migration with running server

  • Export the current history, import it into a temp table, then insert the data into the new tables while ignoring duplicates:
\copy history_old TO '/tmp/history.csv' DELIMITER ',' CSV
       CREATE TEMP TABLE temp_history (
           itemid                   bigint                                    NOT NULL,
           clock                    integer         DEFAULT '0'               NOT NULL,
           value                    DOUBLE PRECISION DEFAULT '0.0000'          NOT NULL,
           ns                       integer         DEFAULT '0'               NOT NULL
       );
       \copy temp_history FROM '/tmp/history.csv' DELIMITER ',' CSV
       INSERT INTO history SELECT * FROM temp_history ON CONFLICT (itemid,clock,ns) DO NOTHING;
       
       \copy history_uint_old TO '/tmp/history_uint.csv' DELIMITER ',' CSV
       CREATE TEMP TABLE temp_history_uint (
           itemid                   bigint                                    NOT NULL,
           clock                    integer         DEFAULT '0'               NOT NULL,
           value                    numeric(20)     DEFAULT '0'               NOT NULL,
           ns                       integer         DEFAULT '0'               NOT NULL
       );
       \copy temp_history_uint FROM '/tmp/history_uint.csv' DELIMITER ',' CSV
       INSERT INTO history_uint SELECT * FROM temp_history_uint ON CONFLICT (itemid,clock,ns) DO NOTHING;
       
       \copy history_str_old TO '/tmp/history_str.csv' DELIMITER ',' CSV
       CREATE TEMP TABLE temp_history_str (
           itemid                   bigint                                    NOT NULL,
           clock                    integer         DEFAULT '0'               NOT NULL,
           value                    varchar(255)    DEFAULT ''                NOT NULL,
           ns                       integer         DEFAULT '0'               NOT NULL
       );
       \copy temp_history_str FROM '/tmp/history_str.csv' DELIMITER ',' CSV
       INSERT INTO history_str (itemid,clock,value,ns) SELECT * FROM temp_history_str ON CONFLICT (itemid,clock,ns) DO NOTHING;
       
       \copy history_log_old TO '/tmp/history_log.csv' DELIMITER ',' CSV
       CREATE TEMP TABLE temp_history_log (
           itemid                   bigint                                    NOT NULL,
           clock                    integer         DEFAULT '0'               NOT NULL,
           timestamp                integer         DEFAULT '0'               NOT NULL,
           source                   varchar(64)     DEFAULT ''                NOT NULL,
           severity                 integer         DEFAULT '0'               NOT NULL,
           value                    text            DEFAULT ''                NOT NULL,
           logeventid               integer         DEFAULT '0'               NOT NULL,
           ns                       integer         DEFAULT '0'               NOT NULL
       );
       \copy temp_history_log FROM '/tmp/history_log.csv' DELIMITER ',' CSV
       INSERT INTO history_log SELECT * FROM temp_history_log ON CONFLICT (itemid,clock,ns) DO NOTHING;
       
       \copy history_text_old TO '/tmp/history_text.csv' DELIMITER ',' CSV
       CREATE TEMP TABLE temp_history_text (
           itemid                   bigint                                    NOT NULL,
           clock                    integer         DEFAULT '0'               NOT NULL,
           value                    text            DEFAULT ''                NOT NULL,
           ns                       integer         DEFAULT '0'               NOT NULL
       );
       \copy temp_history_text FROM '/tmp/history_text.csv' DELIMITER ',' CSV
       INSERT INTO history_text SELECT * FROM temp_history_text ON CONFLICT (itemid,clock,ns) DO NOTHING;
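Before dropping the old tables, row counts can be compared as a sanity check; a sketch (the new tables may hold slightly fewer rows if duplicate (itemid,clock,ns) combinations were skipped):

{.bash}
       sudo -u zabbix psql zabbix -c "SELECT (SELECT COUNT(*) FROM history_old) AS history_old, (SELECT COUNT(*) FROM history) AS history;"
       # repeat for history_uint, history_str, history_log and history_text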

PostgreSQL + TimescaleDB

Export and import must be performed in a tmux/screen session to ensure that the session isn't dropped. Zabbix server should be down during the upgrade.

See also: Important notes

  • Rename tables using history_pk_prepare.sql:
sudo -u zabbix psql zabbix < /usr/share/zabbix-sql-scripts/postgresql/history_pk_prepare.sql

The path to history_pk_prepare.sql has been changed since Zabbix 6.0.9. For versions older than 6.0.9, the following command should be used:

sudo -u zabbix psql zabbix < /usr/share/doc/zabbix-sql-scripts/postgresql/history_pk_prepare.sql
  • Run TimescaleDB hypertable migration scripts (compatible with both TSDB v2.x and v1.x) depending on the compression settings:
    • If compression is enabled (the default installation), run the scripts from /usr/share/zabbix-sql-scripts/postgresql/tsdb_history_pk_upgrade_with_compression:

      cat /usr/share/zabbix-sql-scripts/postgresql/tsdb_history_pk_upgrade_with_compression/history_pk.sql | sudo -u zabbix psql zabbix
             cat /usr/share/zabbix-sql-scripts/postgresql/tsdb_history_pk_upgrade_with_compression/history_pk_uint.sql | sudo -u zabbix psql zabbix
             cat /usr/share/zabbix-sql-scripts/postgresql/tsdb_history_pk_upgrade_with_compression/history_pk_log.sql | sudo -u zabbix psql zabbix
             cat /usr/share/zabbix-sql-scripts/postgresql/tsdb_history_pk_upgrade_with_compression/history_pk_str.sql | sudo -u zabbix psql zabbix
             cat /usr/share/zabbix-sql-scripts/postgresql/tsdb_history_pk_upgrade_with_compression/history_pk_text.sql | sudo -u zabbix psql zabbix
    • If compression is disabled, run the scripts from /usr/share/zabbix-sql-scripts/postgresql/tsdb_history_pk_upgrade_no_compression:

      cat /usr/share/zabbix-sql-scripts/postgresql/tsdb_history_pk_upgrade_no_compression/history_pk.sql | sudo -u zabbix psql zabbix
             cat /usr/share/zabbix-sql-scripts/postgresql/tsdb_history_pk_upgrade_no_compression/history_pk_uint.sql | sudo -u zabbix psql zabbix
             cat /usr/share/zabbix-sql-scripts/postgresql/tsdb_history_pk_upgrade_no_compression/history_pk_log.sql | sudo -u zabbix psql zabbix
             cat /usr/share/zabbix-sql-scripts/postgresql/tsdb_history_pk_upgrade_no_compression/history_pk_str.sql | sudo -u zabbix psql zabbix
             cat /usr/share/zabbix-sql-scripts/postgresql/tsdb_history_pk_upgrade_no_compression/history_pk_text.sql | sudo -u zabbix psql zabbix

See also: tips for improving INSERT performance.
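After the scripts have finished, it can be useful to confirm that the new history tables are hypertables again and whether compression is enabled; a sketch, assuming TimescaleDB 2.x (the informational views are named differently in 1.x):

{.bash}
       sudo -u zabbix psql zabbix -c "SELECT hypertable_name, compression_enabled FROM timescaledb_information.hypertables WHERE hypertable_name LIKE 'history%';"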

Oracle

Export and import must be performed in a tmux/screen session to ensure that the session isn't dropped. Zabbix server should be down during the upgrade.

See also: Important notes

Table upgrade

See the Oracle Data Pump documentation for performance tips.

  • Rename tables using history_pk_prepare.sql.
cd /path/to/zabbix-sources/database/oracle/option-patches
       sqlplus zabbix/password@oracle_host/service
       sqlplus> @history_pk_prepare.sql

Batch migration of history tables

  • Prepare a directory for Data Pump.

Data Pump must have read and write permissions to this directory.

Example:

mkdir -pv /export/history
       chown -R oracle:oracle /export
  • Create a directory object and grant read and write permissions to this object to the user used for Zabbix authentication ('zabbix' in the example below). Under sysdba role, run:
{.sql}
       create directory history as '/export/history';
       grant read,write on directory history to zabbix;
       
  • Export tables. Replace N with the desired thread count.
{.bash}
       expdp zabbix/password@oracle_host/service \
           DIRECTORY=history \
           TABLES=history_old,history_uint_old,history_str_old,history_log_old,history_text_old \
           PARALLEL=N
       
  • Import tables. Replace N with the desired thread count.
{.bash}
       impdp zabbix/password@oracle_host/service \
           DIRECTORY=history \
           TABLES=history_old,history_uint_old,history_str_old,history_log_old,history_text_old \
           REMAP_TABLE=history_old:history,history_uint_old:history_uint,history_str_old:history_str,history_log_old:history_log,history_text_old:history_text \
           data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only

Individual migration of each history table

  • Prepare a directory for Data Pump for each history table. Data Pump must have read and write permissions to these directories.

Example:

mkdir -pv /export/history /export/history_uint /export/history_str /export/history_log /export/history_text
       chown -R oracle:oracle /export
  • Create a directory object and grant read and write permissions to this object to the user used for Zabbix authentication ('zabbix' in the example below). Under sysdba role, run:
{.sql}
       create directory history as '/export/history';
       grant read,write on directory history to zabbix;

       create directory history_uint as '/export/history_uint';
       grant read,write on directory history_uint to zabbix;

       create directory history_str as '/export/history_str';
       grant read,write on directory history_str to zabbix;

       create directory history_log as '/export/history_log';
       grant read,write on directory history_log to zabbix;

       create directory history_text as '/export/history_text';
       grant read,write on directory history_text to zabbix;
       
  • Export and import each table. Replace N with the desired thread count.
{.bash}
       expdp zabbix/password@oracle_host:1521/xe DIRECTORY=history TABLES=history_old PARALLEL=N

       impdp zabbix/password@oracle_host:1521/xe DIRECTORY=history TABLES=history_old REMAP_TABLE=history_old:history data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only

       expdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_uint TABLES=history_uint_old PARALLEL=N

       impdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_uint TABLES=history_uint_old REMAP_TABLE=history_uint_old:history_uint data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only

       expdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_str TABLES=history_str_old PARALLEL=N

       impdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_str TABLES=history_str_old REMAP_TABLE=history_str_old:history_str data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only

       expdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_log TABLES=history_log_old PARALLEL=N

       impdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_log TABLES=history_log_old REMAP_TABLE=history_log_old:history_log data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only

       expdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_text TABLES=history_text_old PARALLEL=N

       impdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_text TABLES=history_text_old REMAP_TABLE=history_text_old:history_text data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only
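After the import completes (with either approach), row counts can be compared before dropping the old tables; a sketch, reusing the connection details from the examples above:

{.bash}
       echo "SELECT (SELECT COUNT(*) FROM history_old) AS history_old, (SELECT COUNT(*) FROM history) AS history FROM dual;" | sqlplus -s zabbix/password@oracle_host/service
       # repeat for history_uint, history_str, history_log and history_text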

Post-migration tasks

For all databases, once the migration is complete, do the following:

  • Verify that everything works as expected.

  • Drop the old tables:

DROP TABLE history_old;
       DROP TABLE history_uint_old;
       DROP TABLE history_str_old;
       DROP TABLE history_log_old;
       DROP TABLE history_text_old;