Upgrade DB Postgresql - primary keys - Postgresql scripts ?
  michalb
    Junior Member
    Joined: Apr 2018 · Posts: 20

    #1


    Hello all,
    I'm testing an upgrade of a PostgreSQL 13.6 database with TimescaleDB 2.3.1. On a test server I restored the DB and followed the Zabbix upgrade documentation, which gives one worked example for the table "history_uint". That example went fine, but I'm no DB guru, and now I'm failing on the table "history_log".
    I used the same example but modified it with a TEMP TABLE because the table structure is different. However, the script does not create a compression job id, and timescaledb_information.jobs looks like this (a quick check for the missing job is sketched below the listing):

    job_id | application_name | schedule_interval | max_runtime | max_retries | retry_period | proc_schema | proc_name | owner | scheduled | config | next_start | hypertable_schema | hypertable_name
    --------+---------------------------+-------------------+-------------+-------------+--------------+-----------------------+--------------------+----------+-----------+--------------------------------------------------+-------------------------------+-------------------+------------------
    1 | Telemetry Reporter [1] | 24:00:00 | 00:01:40 | -1 | 01:00:00 | _timescaledb_internal | policy_telemetry | postgres | t | | 2022-02-22 15:20:10.146532+01 | |
    1027 | Compression Policy [1027] | 1 day | 00:00:00 | -1 | 01:00:00 | _timescaledb_internal | policy_compression | zabbix | t | {"hypertable_id": 7, "compress_after": "871200"} | 2022-02-23 14:24:07.505675+01 | public | trends_uint
    1026 | Compression Policy [1026] | 1 day | 00:00:00 | -1 | 01:00:00 | _timescaledb_internal | policy_compression | zabbix | t | {"hypertable_id": 6, "compress_after": "871200"} | 2022-02-23 14:24:07.519807+01 | public | trends
    1025 | Compression Policy [1025] | 1 day | 00:00:00 | -1 | 01:00:00 | _timescaledb_internal | policy_compression | zabbix | t | {"hypertable_id": 3, "compress_after": "871200"} | 2022-02-23 14:24:10.82734+01 | public | history_log_old
    1024 | Compression Policy [1024] | 1 day | 00:00:00 | -1 | 01:00:00 | _timescaledb_internal | policy_compression | zabbix | t | {"hypertable_id": 4, "compress_after": "871200"} | 2022-02-23 14:24:12.637611+01 | public | history_text_old
    1023 | Compression Policy [1023] | 1 day | 00:00:00 | -1 | 01:00:00 | _timescaledb_internal | policy_compression | zabbix | t | {"hypertable_id": 5, "compress_after": "871200"} | 2022-02-23 14:24:16.015155+01 | public | history_str_old
    1021 | Compression Policy [1021] | 1 day | 00:00:00 | -1 | 01:00:00 | _timescaledb_internal | policy_compression | zabbix | t | {"hypertable_id": 1, "compress_after": "871200"} | 2022-02-23 14:25:08.925029+01 | public | history_old
    1022 | Compression Policy [1022] | 1 day | 00:00:00 | -1 | 01:00:00 | _timescaledb_internal | policy_compression | zabbix | t | {"hypertable_id": 2, "compress_after": "871200"} | 2022-02-23 14:31:00.486394+01 | public | history_uint_old
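
    For reference, the listing only shows compression policies for the *_old tables. A minimal check for the new table's job (just a sketch; it assumes the new hypertable is public.history_log) would be:

    -- Sketch: look for a compression job on the new hypertable.
    -- Zero rows back means add_compression_policy has not created one yet.
    select job_id, hypertable_name
      from timescaledb_information.jobs
     where application_name like 'Compression%'
       and hypertable_schema='public'
       and hypertable_name='history_log';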



    Script:

    -- Verify that there is enough space to allow export of uncompressed data
    select sum(before_compression_total_bytes)/1024/1024 as before_compression_total_mbytes,
           sum(after_compression_total_bytes)/1024/1024 as after_compression_total_mbytes
      from chunk_compression_stats('history_log_old');
    -- Export data

    \copy (select * from history_log_old) TO '/var/lib/postgresql/tables/history_log.csv' DELIMITER ',' CSV

    CREATE TEMP TABLE temp_history_log (
        itemid bigint NOT NULL,
        clock integer DEFAULT '0' NOT NULL,
        timestamp integer DEFAULT '0' NOT NULL,
        source varchar(64) DEFAULT '' NOT NULL,
        severity integer DEFAULT '0' NOT NULL,
        value text DEFAULT '0' NOT NULL,
        logeventid integer DEFAULT '0' NOT NULL,
        ns integer DEFAULT '0' NOT NULL
    );
    -- Import data
    \copy temp_history_log FROM '/var/lib/postgresql/tables/history_log.csv' DELIMITER ',' CSV

    -- Create hypertable and populate it
    select create_hypertable('history_log', 'clock', chunk_time_interval => 86400, migrate_data => true);
    INSERT INTO history_log SELECT * FROM temp_history_log ON CONFLICT (itemid,clock,ns) DO NOTHING;

    -- Enable compression
    select set_integer_now_func('history_log', 'zbx_ts_unix_now', true);
    alter table history_log set (timescaledb.compress, timescaledb.compress_segmentby='itemid', timescaledb.compress_orderby='clock,ns');

    -- Substitute your schema in hypertable_schema
    -- The job id will be returned; it should be passed to run_job
    select add_compression_policy('history_log', (
        select extract(epoch from (config::json->>'compress_after')::interval)
          from timescaledb_information.jobs
         where application_name like 'Compression%%'
           and hypertable_schema='public'
           and hypertable_name='history_log'
    )::integer);

    select alter_job((select job_id from timescaledb_information.jobs where hypertable_schema='public' and hypertable_name='history_log'), scheduled => true);

    -- Run compression job
    call run_job(<JOB_ID>);
    -- May show 'NOTICE: no chunks for hypertable public.history_log that satisfy compress chunk policy'; that is fine.
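
    For reference, a quick sanity check after the import step (just a sketch; the two counts should match the COPY/INSERT counts in the output below):

    -- Sketch: compare row counts between the old hypertable and the new one.
    select (select count(*) from history_log_old) as old_rows,
           (select count(*) from history_log)     as new_rows;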


    Output from the script:

    zabbix=# \i /var/lib/postgresql/upgrage_history_log.sql
    before_compression_total_mbytes | after_compression_total_mbytes
    ---------------------------------+--------------------------------
    19412.445312500000 | 1161.7734375000000000
    (1 row)

    COPY 1351164
    CREATE TABLE
    COPY 1351164
    create_hypertable
    ---------------------------
    (15,public,history_log,t)
    (1 row)

    INSERT 0 1351164
    set_integer_now_func
    ----------------------

    (1 row)

    ALTER TABLE
    add_compression_policy
    ------------------------

    (1 row)

    psql:/var/lib/postgresql/upgrage_history_log.sql:35: ERROR: job ID cannot be NULL
    psql:/var/lib/postgresql/upgrage_history_log.sql:38: ERROR: syntax error at or near "<"
    LINE 1: call run_job(<JOB_ID>);



    Can anybody help me with this problem, or does somebody have working scripts for the DB upgrade?


    With best regards

    Michal
  Tuor
    Junior Member
    Joined: Feb 2015 · Posts: 16

    #2
    Hi Michal,

    You can't just copy-paste it; you need to read the comments and understand them.

    At the step where you run the following command, the job id is printed. You need this job id!

    select add_compression_policy('history_log', (
        select extract(epoch from (config::json->>'compress_after')::interval)
          from timescaledb_information.jobs
         where application_name like 'Compression%%'
           and hypertable_schema='public'
           and hypertable_name='history_log'
    )::integer);

    In the following command, replace <JOB_ID> with your job id:

    call run_job(<JOB_ID>);
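
    If you'd rather not paste the id by hand, psql can also capture it for you (just a sketch; it assumes the compression policy for public.history_log now exists):

    -- Sketch: store the job id in a psql variable, then run the job with it.
    select job_id
      from timescaledb_information.jobs
     where application_name like 'Compression%'
       and hypertable_schema='public'
       and hypertable_name='history_log' \gset
    call run_job(:job_id);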
