3 Database upgrade to primary keys

Overview

This section provides instructions for manually upgrading tables in existing installations to primary keys.

Upgrading to primary keys optimizes how data is indexed and accessed, which may speed up queries and save space. It also improves data management and synchronization in clustered environments, helping with scaling and keeping the system reliable even if some servers fail.

Since Zabbix 6.0, primary keys are used for all tables in new installations.

There is no automatic database upgrade to primary keys; however, existing installations can be upgraded manually after upgrading Zabbix server to 6.0 or newer.

Upgrade instructions are available for the following databases:

  • MySQL
  • PostgreSQL
  • PostgreSQL + TimescaleDB
  • Oracle

The instructions provided on this page are intended for advanced users and may need to be adjusted for your specific configuration. Upgrading to primary keys can be time-consuming and resource-intensive. Make sure that there is enough free disk space; depending on your database size and the amount of stored data, the process may require up to 2.5 times the space currently used by the history tables.

Important notes

To perform the database upgrade (a shell sketch of the full sequence follows the list):

  1. Stop Zabbix server.

It is strongly recommended to stop Zabbix server for the time of the upgrade. However, if absolutely necessary, the upgrade can be performed while the server is running (only for MySQL, MariaDB, and PostgreSQL without TimescaleDB).

  2. Back up your database.
  3. Run the upgrade scripts for your database.
  4. Start Zabbix server.
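
A minimal shell sketch of the sequence above, assuming a systemd-managed server and a MySQL database named zabbix; the service name, credentials, backup method, and script path are examples and may differ in your environment:

systemctl stop zabbix-server
       mysqldump --single-transaction -uzabbix -p zabbix > zabbix_backup.sql
       mysql -uzabbix -p zabbix < /usr/share/zabbix-sql-scripts/mysql/history_pk_prepare.sql
       # ...export/import the history data as described in the database-specific section below...
       systemctl start zabbix-server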

Run the upgrade scripts for the server database only. Zabbix proxy will not benefit from this upgrade.

If the database uses partitions, contact the DB administrator or Zabbix support team for help.

CSV files can be removed after a successful upgrade to primary keys.
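
For example, assuming the default export locations used in the examples below ('/var/lib/mysql-files/' for MySQL, '/tmp' for PostgreSQL):

rm -i /var/lib/mysql-files/history*.csv /tmp/history*.csv
       # adjust the paths to wherever the CSV files were written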

Optionally, the Zabbix frontend can be switched to maintenance mode.

MySQL

Export and import must be performed in tmux/screen to ensure that the session isn't dropped.
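
For example, a detachable session can be started before running the commands below:

tmux new -s zabbix_pk_upgrade
       # if the connection drops, reattach with: tmux attach -t zabbix_pk_upgrade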

See also: Important notes

MySQL 8.0+ with mysqlsh

This method can be used while Zabbix server is running, but it is recommended to stop the server for the time of the upgrade. MySQL Shell (mysqlsh) must be installed and able to connect to the database.
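
MySQL Shell is not part of a default MySQL server installation; with Oracle's MySQL repository configured, it is typically available as the mysql-shell package, for example:

sudo apt install mysql-shell     # Debian/Ubuntu (MySQL APT repository)
       sudo dnf install mysql-shell     # RHEL-based distributions (MySQL Yum repository)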

  • Log in to the MySQL console as root (recommended) or as any user with FILE privileges.

  • Start MySQL with the local_infile variable enabled (see the sketch after the export/import commands below).

  • Rename the old tables and create new tables by running history_pk_prepare.sql.

mysql -uzabbix -p<password> zabbix < /usr/share/zabbix-sql-scripts/mysql/history_pk_prepare.sql
  • Export and import the data.

Connect via mysqlsh. If using a socket connection, specifying the path might be required:

sudo mysqlsh -uroot -S /run/mysqld/mysqld.sock --no-password -Dzabbix

Run the following (CSVPATH can be changed, if needed):

CSVPATH="/var/lib/mysql-files";
       
       util.exportTable("history_old", CSVPATH + "/history.csv", { dialect: "csv" });
       util.importTable(CSVPATH + "/history.csv", {"dialect": "csv", "table": "history" });
       
       util.exportTable("history_uint_old", CSVPATH + "/history_uint.csv", { dialect: "csv" });
       util.importTable(CSVPATH + "/history_uint.csv", {"dialect": "csv", "table": "history_uint" });
       
       util.exportTable("history_str_old", CSVPATH + "/history_str.csv", { dialect: "csv" });
       util.importTable(CSVPATH + "/history_str.csv", {"dialect": "csv", "table": "history_str" });
       
       util.exportTable("history_log_old", CSVPATH + "/history_log.csv", { dialect: "csv" });
       util.importTable(CSVPATH + "/history_log.csv", {"dialect": "csv", "table": "history_log" });
       
       util.exportTable("history_text_old", CSVPATH + "/history_text.csv", { dialect: "csv" });
       util.importTable(CSVPATH + "/history_text.csv", {"dialect": "csv", "table": "history_text" });
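
For the FILE privilege and local_infile prerequisites mentioned above, a minimal sketch (assuming the Zabbix database user is 'zabbix'@'localhost'; adjust the account to your setup):

mysql -uroot -p -e "GRANT FILE ON *.* TO 'zabbix'@'localhost';"   # FILE is a global privilege
       mysql -uroot -p -e "SET GLOBAL local_infile = 1;"                 # local_infile can be changed at runtime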

MariaDB/MySQL 8.0+ without mysqlsh

This upgrade method takes more time and should be used only if an upgrade with mysqlsh is not possible.

Table upgrade

  • Log in to the MySQL console as root (recommended) or as any user with FILE privileges.

  • Start MySQL with the local_infile variable enabled.

  • Rename the old tables and create new tables by executing history_pk_prepare.sql:

mysql -uzabbix -p<password> zabbix < /usr/share/zabbix-sql-scripts/mysql/history_pk_prepare.sql
Migration with server stopped

max_execution_time must be disabled before migrating data to avoid timeouts during the migration.

SET @@max_execution_time=0;
       
       INSERT IGNORE INTO history SELECT * FROM history_old;
       INSERT IGNORE INTO history_uint SELECT * FROM history_uint_old;
       INSERT IGNORE INTO history_str SELECT * FROM history_str_old;
       INSERT IGNORE INTO history_log SELECT * FROM history_log_old;
       INSERT IGNORE INTO history_text SELECT * FROM history_text_old;

Follow the post-migration instructions to drop the old tables.

Migration with server running

Check for which paths import/export is enabled:

mysql> SELECT @@secure_file_priv;
       +-----------------------+
       | @@secure_file_priv    |
       +-----------------------+
       | /var/lib/mysql-files/ |
       +-----------------------+

If the secure_file_priv value is a directory path, export/import will only work for files in that directory. In that case, adjust the file paths in the queries accordingly, or set secure_file_priv to an empty string for the duration of the upgrade.

If the secure_file_priv value is empty, export/import can be performed from any location.

If the secure_file_priv value is NULL, set it to the path that contains the exported table data ('/var/lib/mysql-files/' in the example above).

For more information, see the MySQL documentation.
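
Note that secure_file_priv is read-only at runtime, so changing it requires editing the server configuration and restarting MySQL. A minimal sketch, assuming a Debian-style include directory and a systemd service named mysql (adjust the file path and service name to your setup):

printf '[mysqld]\nsecure_file_priv = "/var/lib/mysql-files/"\n' | sudo tee /etc/mysql/conf.d/zabbix-pk-upgrade.cnf
       sudo systemctl restart mysql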

max_execution_time must be disabled before exporting data to avoid timeouts during the export.

SET @@max_execution_time=0;
       
       SELECT * INTO OUTFILE '/var/lib/mysql-files/history.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_old;
       LOAD DATA INFILE '/var/lib/mysql-files/history.csv' IGNORE INTO TABLE history FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';
       
       SELECT * INTO OUTFILE '/var/lib/mysql-files/history_uint.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_uint_old;
       LOAD DATA INFILE '/var/lib/mysql-files/history_uint.csv' IGNORE INTO TABLE history_uint FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';
       
       SELECT * INTO OUTFILE '/var/lib/mysql-files/history_str.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_str_old;
       LOAD DATA INFILE '/var/lib/mysql-files/history_str.csv' IGNORE INTO TABLE history_str FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';
       
       SELECT * INTO OUTFILE '/var/lib/mysql-files/history_log.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_log_old;
       LOAD DATA INFILE '/var/lib/mysql-files/history_log.csv' IGNORE INTO TABLE history_log FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';
       
       SELECT * INTO OUTFILE '/var/lib/mysql-files/history_text.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_text_old;
       LOAD DATA INFILE '/var/lib/mysql-files/history_text.csv' IGNORE INTO TABLE history_text FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';

Follow the post-migration instructions to drop the old tables.

PostgreSQL

Export and import must be performed in a tmux/screen session to ensure that the session isn't dropped. For installations with TimescaleDB, skip this section and proceed to PostgreSQL + TimescaleDB.

See also: Important notes

Table upgrade

  • Rename the tables using history_pk_prepare.sql:
sudo -u zabbix psql zabbix < /usr/share/zabbix-sql-scripts/postgresql/history_pk_prepare.sql

Migration with server stopped

  • Insert the data from the old tables into the new tables, ignoring duplicates:
INSERT INTO history SELECT * FROM history_old ON CONFLICT (itemid,clock,ns) DO NOTHING;
       
       INSERT INTO history_uint SELECT * FROM history_uint_old ON CONFLICT (itemid,clock,ns) DO NOTHING;
       
       INSERT INTO history_str SELECT * FROM history_str_old ON CONFLICT (itemid,clock,ns) DO NOTHING;
       
       INSERT INTO history_log SELECT * FROM history_log_old ON CONFLICT (itemid,clock,ns) DO NOTHING;
       
       INSERT INTO history_text SELECT * FROM history_text_old ON CONFLICT (itemid,clock,ns) DO NOTHING;

See tips for improving INSERT performance: PostgreSQL: Bulk Loading Huge Amounts of Data, Checkpoint Distance and Amount of WAL.
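
For illustration, one commonly used session-level tweak is to disable synchronous_commit for the migration session only; a minimal sketch, assuming psql access as the zabbix user (repeat for the other history tables):

sudo -u zabbix psql zabbix -c "SET synchronous_commit = off; INSERT INTO history SELECT * FROM history_old ON CONFLICT (itemid,clock,ns) DO NOTHING;"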

Migration with server running

  • Export the current history, import it into a temp table, then insert the data into the new tables while ignoring duplicates:
\copy history_old TO '/tmp/history.csv' DELIMITER ',' CSV
       CREATE TEMP TABLE temp_history (
           itemid                   bigint                                    NOT NULL,
           clock                    integer         DEFAULT '0'               NOT NULL,
           value                    DOUBLE PRECISION DEFAULT '0.0000'          NOT NULL,
           ns                       integer         DEFAULT '0'               NOT NULL
       );
       \copy temp_history FROM '/tmp/history.csv' DELIMITER ',' CSV
       INSERT INTO history SELECT * FROM temp_history ON CONFLICT (itemid,clock,ns) DO NOTHING;
       
       \copy history_uint_old TO '/tmp/history_uint.csv' DELIMITER ',' CSV
       CREATE TEMP TABLE temp_history_uint (
           itemid                   bigint                                    NOT NULL,
           clock                    integer         DEFAULT '0'               NOT NULL,
           value                    numeric(20)     DEFAULT '0'               NOT NULL,
           ns                       integer         DEFAULT '0'               NOT NULL
       );
       \copy temp_history_uint FROM '/tmp/history_uint.csv' DELIMITER ',' CSV
       INSERT INTO history_uint SELECT * FROM temp_history_uint ON CONFLICT (itemid,clock,ns) DO NOTHING;
       
       \copy history_str_old TO '/tmp/history_str.csv' DELIMITER ',' CSV
       CREATE TEMP TABLE temp_history_str (
           itemid                   bigint                                    NOT NULL,
           clock                    integer         DEFAULT '0'               NOT NULL,
           value                    varchar(255)    DEFAULT ''                NOT NULL,
           ns                       integer         DEFAULT '0'               NOT NULL
       );
       \copy temp_history_str FROM '/tmp/history_str.csv' DELIMITER ',' CSV
       INSERT INTO history_str (itemid,clock,value,ns) SELECT * FROM temp_history_str ON CONFLICT (itemid,clock,ns) DO NOTHING;
       
       \copy history_log_old TO '/tmp/history_log.csv' DELIMITER ',' CSV
       CREATE TEMP TABLE temp_history_log (
           itemid                   bigint                                    NOT NULL,
           clock                    integer         DEFAULT '0'               NOT NULL,
           timestamp                integer         DEFAULT '0'               NOT NULL,
           source                   varchar(64)     DEFAULT ''                NOT NULL,
           severity                 integer         DEFAULT '0'               NOT NULL,
           value                    text            DEFAULT ''                NOT NULL,
           logeventid               integer         DEFAULT '0'               NOT NULL,
           ns                       integer         DEFAULT '0'               NOT NULL
       );
       \copy temp_history_log FROM '/tmp/history_log.csv' DELIMITER ',' CSV
       INSERT INTO history_log SELECT * FROM temp_history_log ON CONFLICT (itemid,clock,ns) DO NOTHING;
       
       \copy history_text_old TO '/tmp/history_text.csv' DELIMITER ',' CSV
       CREATE TEMP TABLE temp_history_text (
           itemid                   bigint                                    NOT NULL,
           clock                    integer         DEFAULT '0'               NOT NULL,
           value                    text            DEFAULT ''                NOT NULL,
           ns                       integer         DEFAULT '0'               NOT NULL
       );
       \copy temp_history_text FROM '/tmp/history_text.csv' DELIMITER ',' CSV
       INSERT INTO history_text SELECT * FROM temp_history_text ON CONFLICT (itemid,clock,ns) DO NOTHING;

PostgreSQL + TimescaleDB

Export and import must be performed in tmux/screen to ensure that the session isn't dropped. Zabbix server should be down during the upgrade.

See also: Important notes

  • Rename the tables using history_pk_prepare.sql:
sudo -u zabbix psql zabbix < /usr/share/zabbix-sql-scripts/postgresql/history_pk_prepare.sql

The path to history_pk_prepare.sql has changed since Zabbix 6.0.9. For versions before 6.0.9, the following command should be used:

sudo -u zabbix psql zabbix < /usr/share/doc/zabbix-sql-scripts/postgresql/history_pk_prepare.sql
  • Run the TimescaleDB hypertable migration scripts (compatible with both TSDB v2.x and v1.x) according to the compression settings:
    • If compression is enabled (the default installation), run the scripts from /usr/share/zabbix-sql-scripts/postgresql/tsdb_history_pk_upgrade_with_compression:

      cat /usr/share/zabbix-sql-scripts/postgresql/tsdb_history_pk_upgrade_with_compression/history_pk.sql | sudo -u zabbix psql zabbix
             cat /usr/share/zabbix-sql-scripts/postgresql/tsdb_history_pk_upgrade_with_compression/history_pk_uint.sql | sudo -u zabbix psql zabbix
             cat /usr/share/zabbix-sql-scripts/postgresql/tsdb_history_pk_upgrade_with_compression/history_pk_log.sql | sudo -u zabbix psql zabbix
             cat /usr/share/zabbix-sql-scripts/postgresql/tsdb_history_pk_upgrade_with_compression/history_pk_str.sql | sudo -u zabbix psql zabbix
             cat /usr/share/zabbix-sql-scripts/postgresql/tsdb_history_pk_upgrade_with_compression/history_pk_text.sql | sudo -u zabbix psql zabbix
    • If compression is disabled, run the scripts from /usr/share/zabbix-sql-scripts/postgresql/tsdb_history_pk_upgrade_no_compression:

      cat /usr/share/zabbix-sql-scripts/postgresql/tsdb_history_pk_upgrade_no_compression/history_pk.sql | sudo -u zabbix psql zabbix
             cat /usr/share/zabbix-sql-scripts/postgresql/tsdb_history_pk_upgrade_no_compression/history_pk_uint.sql | sudo -u zabbix psql zabbix
             cat /usr/share/zabbix-sql-scripts/postgresql/tsdb_history_pk_upgrade_no_compression/history_pk_log.sql | sudo -u zabbix psql zabbix
             cat /usr/share/zabbix-sql-scripts/postgresql/tsdb_history_pk_upgrade_no_compression/history_pk_str.sql | sudo -u zabbix psql zabbix
             cat /usr/share/zabbix-sql-scripts/postgresql/tsdb_history_pk_upgrade_no_compression/history_pk_text.sql | sudo -u zabbix psql zabbix

See also: tips for improving INSERT performance.

Oracle

Export and import must be performed in tmux/screen to ensure that the session isn't dropped. Zabbix server should be down during the upgrade.

See also: Important notes

Table upgrade

See the Oracle Data Pump documentation for performance tips.

  • Rename the tables using history_pk_prepare.sql:
cd /path/to/zabbix-sources/database/oracle/option-patches
       sqlplus zabbix/password@oracle_host/service
       sqlplus> @history_pk_prepare.sql

Batch migration of history tables

  • Prepare directories for Data Pump.

Data Pump must have read and write permissions to these directories.

Example:

mkdir -pv /export/history
       chown -R oracle:oracle /export
  • Create a directory object and grant read and write permissions on this object to the user used for Zabbix authentication ('zabbix' in the example below). Under the sysdba role, run:
create directory history as '/export/history';
       grant read,write on directory history to zabbix;
  • Export the tables. Replace N with the desired thread count.
expdp zabbix/password@oracle_host/service \
           DIRECTORY=history \
           TABLES=history_old,history_uint_old,history_str_old,history_log_old,history_text_old \
           PARALLEL=N
  • Import the tables. Replace N with the desired thread count.
impdp zabbix/password@oracle_host/service \
           DIRECTORY=history \
           TABLES=history_old,history_uint_old,history_str_old,history_log_old,history_text_old \
           REMAP_TABLE=history_old:history,history_uint_old:history_uint,history_str_old:history_str,history_log_old:history_log,history_text_old:history_text \
           data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND  PARALLEL=N CONTENT=data_only

Individual migration of history tables

  • Prepare Data Pump directories for each history table. Data Pump must have read and write permissions to these directories.

Example:

mkdir -pv /export/history /export/history_uint /export/history_str /export/history_log /export/history_text
       chown -R oracle:oracle /export
  • Create directory objects and grant read and write permissions on them to the user used for Zabbix authentication ('zabbix' in the example below). Under the sysdba role, run:
create directory history as '/export/history';
       grant read,write on directory history to zabbix;
       
       create directory history_uint as '/export/history_uint';
       grant read,write on directory history_uint to zabbix;
       
       create directory history_str as '/export/history_str';
       grant read,write on directory history_str to zabbix;
       
       create directory history_log as '/export/history_log';
       grant read,write on directory history_log to zabbix;
       
       create directory history_text as '/export/history_text';
       grant read,write on directory history_text to zabbix;
  • Export and import each table. Replace N with the desired thread count.
expdp zabbix/password@oracle_host:1521/xe DIRECTORY=history TABLES=history_old PARALLEL=N
       
       impdp zabbix/password@oracle_host:1521/xe DIRECTORY=history TABLES=history_old REMAP_TABLE=history_old:history data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only
       
       expdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_uint TABLES=history_uint_old PARALLEL=N
       
       impdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_uint TABLES=history_uint_old REMAP_TABLE=history_uint_old:history_uint data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only
       
       expdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_str TABLES=history_str_old PARALLEL=N
       
       impdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_str TABLES=history_str_old REMAP_TABLE=history_str_old:history_str data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only
       
       expdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_log TABLES=history_log_old PARALLEL=N
       
       impdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_log TABLES=history_log_old REMAP_TABLE=history_log_old:history_log data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only
       
       expdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_text TABLES=history_text_old PARALLEL=N
       
       impdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_text TABLES=history_text_old REMAP_TABLE=history_text_old:history_text data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only

Post-migration

For all databases, do the following once the migration is completed:

  • Verify that everything works as expected (a row-count check is sketched after the DROP statements below).

  • Drop the old tables:

DROP TABLE history_old;
       DROP TABLE history_uint_old;
       DROP TABLE history_str_old;
       DROP TABLE history_log_old;
       DROP TABLE history_text_old;
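
For the verification step above, a simple sanity check before dropping the old tables is to compare row counts between each new table and its _old counterpart; counts in the new tables may be slightly lower if duplicate rows were skipped during migration. A PostgreSQL sketch (the same idea applies to the other databases):

sudo -u zabbix psql zabbix -c "SELECT (SELECT COUNT(*) FROM history) AS new_rows, (SELECT COUNT(*) FROM history_old) AS old_rows;"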