Hello,
I'm in the process of migrating to TimescaleDB, and I'm wondering whether a preprocessing discard step is still worthwhile from a storage-usage and performance perspective, given TimescaleDB's compression features. I've found that discarding values in a preprocessing step makes writing triggers difficult in some situations, since triggers are not evaluated when the item value doesn't change. If I could get rid of the discard step and keep a low update interval (< 5 min) on my items, I'd rather do that.
I haven't explored the schema used by Zabbix, or whether the segmentby columns (timescaledb.compress_segmentby, timescaledb.compress_orderby) are used to make the compression efficient (https://docs.timescale.com/timescale...#order-entries).
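If someone wants to check this on a live database, the settings should be visible there too. A minimal sketch, assuming TimescaleDB 2.x and the stock Zabbix table names ('history', 'history_uint'):
Code:
-- which columns Zabbix segments/orders by when compressing
SELECT hypertable_name, attname, segmentby_column_index, orderby_column_index
FROM timescaledb_information.compression_settings
WHERE hypertable_name IN ('history', 'history_uint');

-- compare on-disk size before and after compression for one table
SELECT before_compression_total_bytes, after_compression_total_bytes
FROM hypertable_compression_stats('history');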
EDIT:
It's right here in the source code, with compress_segmentby='itemid':
Code:
DBexecute("alter table %s set (timescaledb.compress,timescaledb.compress_segmentby='%s',"
"timescaledb.compress_orderby='%s')", table_name, ZBX_TS_SEGMENT_BY,
(ZBX_COMPRESS_TABLE_HISTORY == type) ? "clock,ns" : "clock");
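If I expand that format string for a history table myself (assuming ZBX_TS_SEGMENT_BY is 'itemid', as above), it comes out to roughly:
Code:
alter table history set (
    timescaledb.compress,
    timescaledb.compress_segmentby='itemid',
    timescaledb.compress_orderby='clock,ns'
);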
Any opinions?

Thank you.