
3 Database upgrade to primary keys

Overview

Since Zabbix 6.0, primary keys are used for all tables in new installations.

This section provides instructions for manually upgrading the history tables of existing installations to primary keys.

Instructions are available for:

  • MySQL/MariaDB
  • PostgreSQL
  • TimescaleDB
  • Oracle

The instructions on this page are intended for advanced users. Note that these instructions may need to be adjusted for your specific configuration.

Important notes

  • Make sure to back up the database before the upgrade
  • If your database uses partitions, contact your DB administrator or Zabbix support team for help
  • The CSV files can be removed after a successful upgrade to primary keys

MySQL

Export and import must be performed in tmux/screen, so that the session isn't dropped.

See also: Important notes

MySQL 8.0+ with mysqlsh

This method can be used with a running Zabbix server, but it is recommended to stop the server for the duration of the upgrade. The MySQL Shell (mysqlsh) must be installed and able to connect to the DB.

  • Log in to MySQL console as root (recommended) or as any user with FILE privileges.

  • Start MySQL with local_infile variable enabled.
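One way to enable it at runtime from the MySQL console is shown below; this is a sketch, and the variable can also be set in the server configuration file instead:

SET GLOBAL local_infile = 1;
-- confirm the current value
SELECT @@GLOBAL.local_infile;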

  • Rename old tables and create new tables by running history_pk_prepare.sql.

mysql -uzabbix -p<password> zabbix < /usr/share/doc/zabbix-sql-scripts/mysql/history_pk_prepare.sql
  • Export and import data.

Connect via mysqlsh. If using a socket connection, specifying the path might be required.

sudo mysqlsh -uroot -S /run/mysqld/mysqld.sock --no-password -Dzabbix

Run (CSVPATH can be changed as needed):

CSVPATH="/var/lib/mysql-files";
       
       util.exportTable("history_old", CSVPATH + "/history.csv", { dialect: "csv" });
       util.importTable(CSVPATH + "/history.csv", {"dialect": "csv", "table": "history" });
       
       util.exportTable("history_uint_old", CSVPATH + "/history_uint.csv", { dialect: "csv" });
       util.importTable(CSVPATH + "/history_uint.csv", {"dialect": "csv", "table": "history_uint" });
       
       util.exportTable("history_str_old", CSVPATH + "/history_str.csv", { dialect: "csv" });
       util.importTable(CSVPATH + "/history_str.csv", {"dialect": "csv", "table": "history_str" });
       
       util.exportTable("history_log_old", CSVPATH + "/history_log.csv", { dialect: "csv" });
       util.importTable(CSVPATH + "/history_log.csv", {"dialect": "csv", "table": "history_log" });
       
       util.exportTable("history_text_old", CSVPATH + "/history_text.csv", { dialect: "csv" });
       util.importTable(CSVPATH + "/history_text.csv", {"dialect": "csv", "table": "history_text" });
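Before dropping the old tables, it may be useful to compare row counts of the old and new tables from the mysql client. This optional sanity check is not part of the original procedure; counts can differ slightly if duplicate rows were skipped or if the server kept writing during the export:

SELECT
    (SELECT COUNT(*) FROM history_old) AS history_old_rows,
    (SELECT COUNT(*) FROM history)     AS history_rows;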

MariaDB/MySQL 8.0+ without mysqlsh

This upgrade method takes more time and should be used only if an upgrade with mysqlsh is not possible.

Table upgrade
  • Log in to MySQL console as root (recommended) or any user with FILE privileges.

  • Start MySQL with local_infile variable enabled.

  • Rename old tables and create new tables by running history_pk_prepare.sql:

mysql -uzabbix -p<password> zabbix < /usr/share/doc/zabbix-sql-scripts/mysql/history_pk_prepare.sql
Migration with stopped server

max_execution_time must be disabled before migrating data to avoid timeout during migration.

SET @@max_execution_time=0;
       
       INSERT IGNORE INTO history SELECT * FROM history_old;
       INSERT IGNORE INTO history_uint SELECT * FROM history_uint_old;
       INSERT IGNORE INTO history_str SELECT * FROM history_str_old;
       INSERT IGNORE INTO history_log SELECT * FROM history_log_old;
       INSERT IGNORE INTO history_text SELECT * FROM history_text_old;
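Each INSERT IGNORE silently skips rows that would violate the new primary key. To see how many rows a statement actually copied, the standard ROW_COUNT() function can be queried right after it (a sketch, not part of the original instructions):

INSERT IGNORE INTO history SELECT * FROM history_old;
-- number of rows actually inserted by the statement above; skipped duplicates are not counted
SELECT ROW_COUNT();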

Follow post-migration instructions to drop the old tables.

Migration with running server

Check which paths are enabled for import/export:

mysql> SELECT @@secure_file_priv;
       +-----------------------+
       | @@secure_file_priv    |
       +-----------------------+
       | /var/lib/mysql-files/ |
       +-----------------------+

If the secure_file_priv value is a path to a directory, export/import will only be performed for files in that directory. In this case, edit the file paths in the queries accordingly, or set secure_file_priv to an empty string for the duration of the upgrade.

If the secure_file_priv value is empty, export/import can be performed from any location.

If the secure_file_priv value is NULL, set it to the path that contains the exported table data ('/var/lib/mysql-files/' in the example above).

For more information, see MySQL documentation.
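As a quick pre-flight check, the variables discussed in this section can be reviewed in one query (a sketch):

SELECT @@secure_file_priv, @@GLOBAL.local_infile, @@max_execution_time;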

max_execution_time must be disabled before exporting data to avoid timeout during export.

SET @@max_execution_time=0;
       
       SELECT * INTO OUTFILE '/var/lib/mysql-files/history.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_old;
       LOAD DATA INFILE '/var/lib/mysql-files/history.csv' IGNORE INTO TABLE history FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';
       
       SELECT * INTO OUTFILE '/var/lib/mysql-files/history_uint.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_uint_old;
       LOAD DATA INFILE '/var/lib/mysql-files/history_uint.csv' IGNORE INTO TABLE history_uint FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';
       
       SELECT * INTO OUTFILE '/var/lib/mysql-files/history_str.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_str_old;
       LOAD DATA INFILE '/var/lib/mysql-files/history_str.csv' IGNORE INTO TABLE history_str FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';
       
       SELECT * INTO OUTFILE '/var/lib/mysql-files/history_log.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_log_old;
       LOAD DATA INFILE '/var/lib/mysql-files/history_log.csv' IGNORE INTO TABLE history_log FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';
       
       SELECT * INTO OUTFILE '/var/lib/mysql-files/history_text.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_text_old;
       LOAD DATA INFILE '/var/lib/mysql-files/history_text.csv' IGNORE INTO TABLE history_text FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';

Follow post-migration instructions to drop the old tables.

PostgreSQL

Export and import must be performed in tmux/screen, so that the session isn't dropped.

See also: Important notes

Upgrading tables
  • Rename tables using history_pk_prepare.sql.
sudo -u zabbix psql zabbix < /usr/share/doc/zabbix-sql-scripts/postgresql/history_pk_prepare.sql
  • Export current history, import it to the temp table, then insert the data into the new tables while ignoring duplicates:
\copy history_old TO '/tmp/history.csv' DELIMITER ',' CSV
       CREATE TEMP TABLE temp_history (
           itemid                   bigint                                    NOT NULL,
           clock                    integer         DEFAULT '0'               NOT NULL,
           value                    DOUBLE PRECISION DEFAULT '0.0000'          NOT NULL,
           ns                       integer         DEFAULT '0'               NOT NULL
       );
       \copy temp_history FROM '/tmp/history.csv' DELIMITER ',' CSV
       INSERT INTO history SELECT * FROM temp_history ON CONFLICT (itemid,clock,ns) DO NOTHING;

       \copy history_uint_old TO '/tmp/history_uint.csv' DELIMITER ',' CSV
       CREATE TEMP TABLE temp_history_uint (
           itemid                   bigint                                    NOT NULL,
           clock                    integer         DEFAULT '0'               NOT NULL,
           value                    numeric(20)     DEFAULT '0'               NOT NULL,
           ns                       integer         DEFAULT '0'               NOT NULL
       );
       \copy temp_history_uint FROM '/tmp/history_uint.csv' DELIMITER ',' CSV
       INSERT INTO history_uint SELECT * FROM temp_history_uint ON CONFLICT (itemid,clock,ns) DO NOTHING;

       \copy history_str_old TO '/tmp/history_str.csv' DELIMITER ',' CSV
       CREATE TEMP TABLE temp_history_str (
           itemid                   bigint                                    NOT NULL,
           clock                    integer         DEFAULT '0'               NOT NULL,
           value                    varchar(255)    DEFAULT ''                NOT NULL,
           ns                       integer         DEFAULT '0'               NOT NULL
       );
       \copy temp_history_str FROM '/tmp/history_str.csv' DELIMITER ',' CSV
       INSERT INTO history_str (itemid,clock,value,ns) SELECT * FROM temp_history_str ON CONFLICT (itemid,clock,ns) DO NOTHING;

       \copy history_log_old TO '/tmp/history_log.csv' DELIMITER ',' CSV
       CREATE TEMP TABLE temp_history_log (
           itemid                   bigint                                    NOT NULL,
           clock                    integer         DEFAULT '0'               NOT NULL,
           timestamp                integer         DEFAULT '0'               NOT NULL,
           source                   varchar(64)     DEFAULT ''                NOT NULL,
           severity                 integer         DEFAULT '0'               NOT NULL,
           value                    text            DEFAULT ''                NOT NULL,
           logeventid               integer         DEFAULT '0'               NOT NULL,
           ns                       integer         DEFAULT '0'               NOT NULL
       );
       \copy temp_history_log FROM '/tmp/history_log.csv' DELIMITER ',' CSV
       INSERT INTO history_log SELECT * FROM temp_history_log ON CONFLICT (itemid,clock,ns) DO NOTHING;

       \copy history_text_old TO '/tmp/history_text.csv' DELIMITER ',' CSV
       CREATE TEMP TABLE temp_history_text (
           itemid                   bigint                                    NOT NULL,
           clock                    integer         DEFAULT '0'               NOT NULL,
           value                    text            DEFAULT ''                NOT NULL,
           ns                       integer         DEFAULT '0'               NOT NULL
       );
       \copy temp_history_text FROM '/tmp/history_text.csv' DELIMITER ',' CSV
       INSERT INTO history_text SELECT * FROM temp_history_text ON CONFLICT (itemid,clock,ns) DO NOTHING;
  • Verify that everything works as expected

  • Drop old tables:

DROP TABLE history_old;
       DROP TABLE history_uint_old;
       DROP TABLE history_str_old;
       DROP TABLE history_log_old;
       DROP TABLE history_text_old;

See also: Tips for improving PostgreSQL insert performance
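For example, two session-level settings that are commonly suggested for bulk loads can be applied before running the INSERT statements; this is a sketch based on general PostgreSQL tuning advice, so verify the values against your environment:

-- trade durability guarantees for faster commits, for this session only
SET synchronous_commit = off;
-- give this session more memory for index maintenance
SET maintenance_work_mem = '1GB';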

Migration with running server

  • Export current history, import it to the temp table, then insert the data into new tables while ignoring duplicates:
\copy history_old TO '/tmp/history.csv' DELIMITER ',' CSV
       CREATE TEMP TABLE temp_history (
           itemid                   bigint                                    NOT NULL,
           clock                    integer         DEFAULT '0'               NOT NULL,
           value                    DOUBLE PRECISION DEFAULT '0.0000'          NOT NULL,
           ns                       integer         DEFAULT '0'               NOT NULL
       );
       \copy temp_history FROM '/tmp/history.csv' DELIMITER ',' CSV
       INSERT INTO history SELECT * FROM temp_history ON CONFLICT (itemid,clock,ns) DO NOTHING;
       
       \copy history_uint_old TO '/tmp/history_uint.csv' DELIMITER ',' CSV
       CREATE TEMP TABLE temp_history_uint (
           itemid                   bigint                                    NOT NULL,
           clock                    integer         DEFAULT '0'               NOT NULL,
           value                    numeric(20)     DEFAULT '0'               NOT NULL,
           ns                       integer         DEFAULT '0'               NOT NULL
       );
       \copy temp_history_uint FROM '/tmp/history_uint.csv' DELIMITER ',' CSV
       INSERT INTO history_uint SELECT * FROM temp_history_uint ON CONFLICT (itemid,clock,ns) DO NOTHING;
       
       \copy history_str_old TO '/tmp/history_str.csv' DELIMITER ',' CSV
       CREATE TEMP TABLE temp_history_str (
           itemid                   bigint                                    NOT NULL,
           clock                    integer         DEFAULT '0'               NOT NULL,
           value                    varchar(255)    DEFAULT ''                NOT NULL,
           ns                       integer         DEFAULT '0'               NOT NULL
       );
       \copy temp_history_str FROM '/tmp/history_str.csv' DELIMITER ',' CSV
       INSERT INTO history_str (itemid,clock,value,ns) SELECT * FROM temp_history_str ON CONFLICT (itemid,clock,ns) DO NOTHING;
       
       \copy history_log_old TO '/tmp/history_log.csv' DELIMITER ',' CSV
       CREATE TEMP TABLE temp_history_log (
           itemid                   bigint                                    NOT NULL,
           clock                    integer         DEFAULT '0'               NOT NULL,
           timestamp                integer         DEFAULT '0'               NOT NULL,
           source                   varchar(64)     DEFAULT ''                NOT NULL,
           severity                 integer         DEFAULT '0'               NOT NULL,
           value                    text            DEFAULT ''                NOT NULL,
           logeventid               integer         DEFAULT '0'               NOT NULL,
           ns                       integer         DEFAULT '0'               NOT NULL
       );
       \copy temp_history_log FROM '/tmp/history_log.csv' DELIMITER ',' CSV
       INSERT INTO history_log SELECT * FROM temp_history_log ON CONFLICT (itemid,clock,ns) DO NOTHING;
       
       \copy history_text_old TO '/tmp/history_text.csv' DELIMITER ',' CSV
       CREATE TEMP TABLE temp_history_text (
           itemid                   bigint                                    NOT NULL,
           clock                    integer         DEFAULT '0'               NOT NULL,
           value                    text            DEFAULT ''                NOT NULL,
           ns                       integer         DEFAULT '0'               NOT NULL
       );
       \copy temp_history_text FROM '/tmp/history_text.csv' DELIMITER ',' CSV
       INSERT INTO history_text SELECT * FROM temp_history_text ON CONFLICT (itemid,clock,ns) DO NOTHING;

TimescaleDB v1.x

Export and import must be performed in tmux/screen, so that the session isn't dropped.

See also: Important notes

Upgrading tables
  • Rename tables using history_pk_prepare.sql.
sudo -u zabbix psql zabbix < /usr/share/doc/zabbix-sql-scripts/postgresql/history_pk_prepare.sql
  • Example of upgrading one table:
-- Verify that there is enough space to allow export of uncompressed data
       select sum(before_compression_total_bytes)/1024/1024 as before_compression_total_mbytes, sum(after_compression_total_bytes)/1024/1024 as after_compression_total_mbytes FROM chunk_compression_stats('history_uint_old');

       -- Export data
       \copy (select * from history_uint_old) TO '/tmp/history_uint.csv' DELIMITER ',' CSV

       CREATE TEMP TABLE temp_history_uint (
           itemid                   bigint                                    NOT NULL,
           clock                    integer         DEFAULT '0'               NOT NULL,
           value                    numeric(20)     DEFAULT '0'               NOT NULL,
           ns                       integer         DEFAULT '0'               NOT NULL
       );
       -- Import data
       \copy temp_history_uint FROM '/tmp/history_uint.csv' DELIMITER ',' CSV

       -- Create hypertable and populate it
       select create_hypertable('history_uint', 'clock', chunk_time_interval => 86400, migrate_data => true);
       INSERT INTO history_uint SELECT * FROM temp_history_uint ON CONFLICT (itemid,clock,ns) DO NOTHING;

       -- Enable compression
       select set_integer_now_func('history_uint', 'zbx_ts_unix_now', true);
       alter table history_uint set (timescaledb.compress,timescaledb.compress_segmentby='itemid',timescaledb.compress_orderby='clock,ns');

        -- A job id will be returned; it should be passed to run_job
       select add_compress_chunks_policy('history_uint', (
               select (p.older_than).integer_interval from _timescaledb_config.bgw_policy_compress_chunks p 
               inner join _timescaledb_catalog.hypertable h on (h.id=p.hypertable_id) where h.table_name='history_uint'
           )::integer
       );

       select alter_job((select job_id from timescaledb_information.jobs where hypertable_schema='public' and hypertable_name='history_uint'), scheduled => true);

       -- Run compression job
       call run_job(<JOB_ID>);
        -- May show 'NOTICE:  no chunks for hypertable public.history_uint that satisfy compress chunk policy'; this is fine.
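To confirm that a compress-chunks policy now exists for the new hypertable, the same internal catalog tables used in the query above can be inspected (a sketch for TimescaleDB 1.x):

SELECT h.table_name, p.job_id, (p.older_than).integer_interval AS compress_after
FROM _timescaledb_config.bgw_policy_compress_chunks p
INNER JOIN _timescaledb_catalog.hypertable h ON (h.id=p.hypertable_id)
WHERE h.table_name='history_uint';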
  • Verify that everything works as expected

  • Drop old tables:

DROP TABLE history_old;
       DROP TABLE history_uint_old;
       DROP TABLE history_str_old;
       DROP TABLE history_log_old;
       DROP TABLE history_text_old;

See also: Tips for improving PostgreSQL insert performance

TimescaleDB v2.x

Export and import must be performed in tmux/screen, so that the session isn't dropped.

See also: Important notes

Upgrading tables
  • Rename tables using history_pk_prepare.sql.
sudo -u zabbix psql zabbix < /usr/share/doc/zabbix-sql-scripts/postgresql/history_pk_prepare.sql
  • Example of upgrading one table:
-- Verify that there is enough space to allow export of uncompressed data
       select sum(before_compression_total_bytes)/1024/1024 as before_compression_total_mbytes, sum(after_compression_total_bytes)/1024/1024 as after_compression_total_mbytes FROM chunk_compression_stats('history_uint_old');

       -- Export data
       \copy (select * from history_uint_old) TO '/tmp/history_uint.csv' DELIMITER ',' CSV

       CREATE TEMP TABLE temp_history_uint (
           itemid                   bigint                                    NOT NULL,
           clock                    integer         DEFAULT '0'               NOT NULL,
           value                    numeric(20)     DEFAULT '0'               NOT NULL,
           ns                       integer         DEFAULT '0'               NOT NULL
       );
       -- Import data
       \copy temp_history_uint FROM '/tmp/history_uint.csv' DELIMITER ',' CSV

       -- Create hypertable and populate it
       select create_hypertable('history_uint', 'clock', chunk_time_interval => 86400, migrate_data => true);
       INSERT INTO history_uint SELECT * FROM temp_history_uint ON CONFLICT (itemid,clock,ns) DO NOTHING;

       -- Enable compression
       select set_integer_now_func('history_uint', 'zbx_ts_unix_now', true);
       alter table history_uint set (timescaledb.compress,timescaledb.compress_segmentby='itemid',timescaledb.compress_orderby='clock,ns');

       -- Substitute your schema in hypertable_schema
        -- A job id will be returned; it should be passed to run_job
       select add_compression_policy('history_uint', (
           select extract(epoch from (config::json->>'compress_after')::interval) from timescaledb_information.jobs where application_name like 'Compression%%' and hypertable_schema='public' and hypertable_name='history_uint_old'
           )::integer
       );

       select alter_job((select job_id from timescaledb_information.jobs where hypertable_schema='public' and hypertable_name='history_uint'), scheduled => true);

       -- Run compression job
       call run_job(<JOB_ID>);
        -- May show 'NOTICE:  no chunks for hypertable public.history_uint that satisfy compress chunk policy'; this is fine.
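To confirm that the compression policy is attached to the new hypertable and that chunks are being compressed, the views and functions already used above can be queried (a sketch for TimescaleDB 2.x):

-- compression job for the new hypertable
SELECT job_id, hypertable_name, config FROM timescaledb_information.jobs
WHERE application_name LIKE 'Compression%' AND hypertable_schema='public' AND hypertable_name='history_uint';

-- per-chunk compression status
SELECT chunk_name, compression_status FROM chunk_compression_stats('history_uint');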
  • Verify that everything works as expected

  • Drop old tables:

DROP TABLE history_old;
       DROP TABLE history_uint_old;
       DROP TABLE history_str_old;
       DROP TABLE history_log_old;
       DROP TABLE history_text_old;

See also: Tips for improving PostgreSQL insert performance

Oracle

Export and import must be performed in tmux/screen, so that the session isn't dropped.

See also: Important notes

Batch migration of history tables

  • Prepare directories for Data Pump.

Data Pump must have read and write permissions to these directories.

Example:

mkdir -pv /export/history
       chown -R oracle:oracle /export
  • Create a directory object and grant read and write permissions on it to the user used for Zabbix authentication ('zabbix' in the example below). Under the sysdba role, run:
create directory history as '/export/history';
       grant read,write on directory history to zabbix;
  • Export tables. Replace N with the desired thread count.
expdp zabbix/password@oracle_host/service \
           DIRECTORY=history \
           TABLES=history_old,history_uint_old,history_str_old,history_log_old,history_text_old \
           PARALLEL=N
  • Import tables. Replace N with the desired thread count.
impdp zabbix/password@oracle_host/service \
           DIRECTORY=history \
            TABLES=history_old,history_uint_old,history_str_old,history_log_old,history_text_old \
        REMAP_TABLE=history_old:history,history_uint_old:history_uint,history_str_old:history_str,history_log_old:history_log,history_text_old:history_text \
           data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND  PARALLEL=N CONTENT=data_only
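After the import completes, row counts of the old and new tables can be compared from sqlplus as a quick sanity check (a sketch; counts can differ if duplicate rows were skipped via SKIP_CONSTRAINT_ERRORS):

SELECT (SELECT COUNT(*) FROM history_old) AS old_rows,
       (SELECT COUNT(*) FROM history) AS new_rows
FROM dual;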


Importing/exporting history tables in one attempt

Additionally, consider performance tips for Oracle Data Pump.

  • Rename tables using history_pk_prepare.sql.
shell> cd /path/to/zabbix-sources/database/oracle
       shell> sqlplus zabbix/password@oracle_host/ORCL
       sqlplus> @history_pk_prepare.sql
  • Prepare directories for Data Pump.

Example:

# mkdir -pv /export/history
       # chown -R oracle:oracle /export
  • Create a directory object and grant permissions on it. Under the sysdba role, run:
create directory history as '/export/history';
       grant read,write on directory history to zabbix;
  • Export tables. Replace N with your desired thread count.
expdp zabbix/[email protected]:1521/z \
           DIRECTORY=history \
           TABLES=history_old,history_uint_old,history_str_old,history_log_old,history_text_old \
           PARALLEL=N
  • Import tables. Replace N with your desired thread count.
impdp zabbix/[email protected]:1521/z \
           DIRECTORY=history \
            TABLES=history_old,history_uint_old,history_str_old,history_log_old,history_text_old \
        REMAP_TABLE=history_old:history,history_uint_old:history_uint,history_str_old:history_str,history_log_old:history_log,history_text_old:history_text \
           data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND  PARALLEL=N CONTENT=data_only
  • Verify that everything works as expected

  • Drop old tables:

DROP TABLE history_old;
       DROP TABLE history_uint_old;
       DROP TABLE history_str_old;
       DROP TABLE history_log_old;
       DROP TABLE history_text_old;
Importing/exporting history tables individually

Additionally, consider performance tips for Oracle Data Pump.

  • Rename tables using history_pk_prepare.sql.
shell> cd /path/to/zabbix-sources/database/oracle
       shell> sqlplus zabbix/password@oracle_host/ORCL
       sqlplus> @history_pk_prepare.sql
  • Prepare directories for Data Pump.

Example:

# mkdir -pv /export/history /export/history_uint /export/history_str /export/history_log /export/history_text
       # chown -R oracle:oracle /export
  • Create a directory object and grant permissions on it. Under the sysdba role, run:
create directory history as '/export/history';
       grant read,write on directory history to zabbix;

       create directory history_uint as '/export/history_uint';
       grant read,write on directory history_uint to zabbix;

       create directory history_str as '/export/history_str';
       grant read,write on directory history_str to zabbix;

       create directory history_log as '/export/history_log';
       grant read,write on directory history_log to zabbix;

       create directory history_text as '/export/history_text';
       grant read,write on directory history_text to zabbix;
  • Export and import each table. Replace N with your desired thread count.
expdp zabbix/[email protected]:1521/xe DIRECTORY=history TABLES=history_old PARALLEL=N

       impdp zabbix/[email protected]:1521/xe DIRECTORY=history TABLES=history_old REMAP_TABLE=history_old:history data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only

       expdp zabbix/[email protected]:1521/xe DIRECTORY=history_uint TABLES=history_uint_old PARALLEL=N

       impdp zabbix/[email protected]:1521/xe DIRECTORY=history_uint TABLES=history_uint_old REMAP_TABLE=history_uint_old:history_uint data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only

       expdp zabbix/[email protected]:1521/xe DIRECTORY=history_str TABLES=history_str_old PARALLEL=N

       impdp zabbix/[email protected]:1521/xe DIRECTORY=history_str TABLES=history_str_old REMAP_TABLE=history_str_old:history_str data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only

       expdp zabbix/[email protected]:1521/xe DIRECTORY=history_log TABLES=history_log_old PARALLEL=N

       impdp zabbix/[email protected]:1521/xe DIRECTORY=history_log TABLES=history_log_old REMAP_TABLE=history_log_old:history_log data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only

       expdp zabbix/[email protected]:1521/xe DIRECTORY=history_text TABLES=history_text_old PARALLEL=N

       impdp zabbix/[email protected]:1521/xe DIRECTORY=history_text TABLES=history_text_old REMAP_TABLE=history_text_old:history_text data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only
  • Verify that everything works as expected

  • Drop old tables:

DROP TABLE history_old;
       DROP TABLE history_uint_old;
       DROP TABLE history_str_old;
       DROP TABLE history_log_old;
       DROP TABLE history_text_old;

Post-migration

For all databases, once the migration is complete, do the following:

  • Verify that everything works as expected.

  • Drop the old tables:

DROP TABLE history_old;
       DROP TABLE history_uint_old;
       DROP TABLE history_str_old;
       DROP TABLE history_log_old;
       DROP TABLE history_text_old;