housekeeper cannot delete data compressed timescaledb

  • gralech
    Junior Member
    • Oct 2020
    • 12

    #1

    housekeeper cannot delete data compressed timescaledb

    Good day!

    I'm running the Zabbix server 5.0 Docker image with PostgreSQL + TimescaleDB.

    I'm seeing many errors like this in the server log:
    Code:
    312:20201019:140231.386 [Z3005] query failed: [0] PGRES_FATAL_ERROR:ERROR: transparent decompression only supports tableoid system column
    [delete from history_uint where itemid=112833 and ctid = any(array(select ctid from history_uint where itemid=112833 limit 50000))]

    Compression is enabled in Administration → General → Housekeeping.

    Please advise how to fix this?
  • AVP.Riga
    Member
    • Oct 2020
    • 59

    #2
    Good day,

    Could you please share your Housekeeper configuration?


    • gralech
      Junior Member
      • Oct 2020
      • 12

      #3
      /etc/zabbix/zabbix_server.conf
      Code:
      HousekeepingFrequency=1
      MaxHousekeeperDelete=50000

      [Attached screenshot: 2020-10-20_114458.png]



      • Catwoolfii
        Catwoolfii commented
        I ran into this problem too. It turned out I needed to enable the override of history and trend storage periods (the TimescaleDB-specific housekeeping options).
    • AVP.Riga
      Member
      • Oct 2020
      • 59

      #4
      Good day,

      You can try adjusting these settings:
      /etc/zabbix/zabbix_server.conf
      Code:
      HousekeepingFrequency=1
      MaxHousekeeperDelete=50000
      For example:
      /etc/zabbix/zabbix_server.conf
      Code:
      HousekeepingFrequency=1
      MaxHousekeeperDelete=150000
      or
      /etc/zabbix/zabbix_server.conf
      Code:
      HousekeepingFrequency=2
      MaxHousekeeperDelete=75000
      Last edited by AVP.Riga; 20-10-2020, 13:59.


      • gralech
        Junior Member
        • Oct 2020
        • 12

        #5
        Thank you, I will try it.


        • GOID
          Member
          • Oct 2014
          • 35

          #6
          Hi!
          Same issue here. Records in the zabbix_server log:
          Code:
          ...
          903243:20201021:004031.866 [Z3005] query failed: [0] PGRES_FATAL_ERROR:ERROR: cannot update/delete rows from chunk "_hyper_1_12_chunk" as it is compressed
          [delete from history where itemid=29170 and clock<1601253541]
          903243:20201021:004031.871 [Z3005] query failed: [0] PGRES_FATAL_ERROR:ERROR: cannot update/delete rows from chunk "_hyper_1_12_chunk" as it is compressed
          [delete from history where itemid=29171 and clock<1601253551]
          903243:20201021:004031.876 [Z3005] query failed: [0] PGRES_FATAL_ERROR:ERROR: cannot update/delete rows from chunk "_hyper_1_12_chunk" as it is compressed
          [delete from history where itemid=29172 and clock<1601253541]
          ...
          903243:20201021:102538.652 [Z3005] query failed: [0] PGRES_FATAL_ERROR:ERROR: transparent decompression only supports tableoid system column
          [delete from trends_uint where itemid=42469 and ctid = any(array(select ctid from trends_uint where itemid=42469 limit 10000))]
          903243:20201021:102538.652 [Z3005] query failed: [0] PGRES_FATAL_ERROR:ERROR: transparent decompression only supports tableoid system column
          [delete from trends_uint where itemid=40065 and ctid = any(array(select ctid from trends_uint where itemid=40065 limit 10000))]
          903243:20201021:102538.653 [Z3005] query failed: [0] PGRES_FATAL_ERROR:ERROR: transparent decompression only supports tableoid system column
          [delete from trends_uint where itemid=44169 and ctid = any(array(select ctid from trends_uint where itemid=44169 limit 10000))]
          My housekeeper settings:
          Code:
          HousekeepingFrequency=1
          MaxHousekeeperDelete=10000
          [Attached screenshot: zabbix_conf.png]

          If I understand the logic correctly, with TimescaleDB and the options in the screenshot, Zabbix should drop whole chunks rather than delete individual rows.
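
          If that reading is correct, the chunk-level equivalent of the failing row deletes is TimescaleDB's drop_chunks(). A minimal sketch, assuming the TimescaleDB 1.x call signature (the 2.x API reverses the arguments to drop_chunks('history', ...)); the 90-day interval here is purely illustrative, not taken from this setup:

          ```sql
          -- Drop every chunk of the 'history' hypertable whose data is
          -- entirely older than 90 days. Chunks are dropped whole,
          -- compressed or not, so the compressed-chunk errors never arise.
          SELECT drop_chunks(interval '90 days', 'history');
          ```

          This is what the Zabbix housekeeper is expected to do itself when the TimescaleDB override is enabled; the query is only useful for manual cleanup or verification.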
          Last edited by GOID; 21-10-2020, 20:47.


          • GOID
            Member
            • Oct 2014
            • 35

            #7
            After restarting the zabbix_server service (with override enabled), the housekeeper no longer seems to try to delete rows:
            Code:
            1504746:20201023:091429.657 executing housekeeper
            1504746:20201023:091430.394 housekeeper [deleted 0 hist/trends, 0 items/triggers, 3601 events, 9221 problems, 0 sessions, 0 alarms, 0 audit, 0 records in 0.735347 sec, idle for 1 hour(s)]
            1504746:20201023:101430.830 executing housekeeper
            1504746:20201023:101431.562 housekeeper [deleted 0 hist/trends, 0 items/triggers, 3601 events, 9545 problems, 0 sessions, 0 alarms, 0 audit, 0 records in 0.729855 sec, idle for 1 hour(s)]
            1504746:20201023:111431.989 executing housekeeper
            1504746:20201023:111432.695 housekeeper [deleted 0 hist/trends, 0 items/triggers, 3602 events, 9609 problems, 0 sessions, 0 alarms, 0 audit, 0 records in 0.703752 sec, idle for 1 hour(s)]
            Additionally: before the restart, Postgres had not compressed a single chunk. After the restart:
            Code:
            2020-10-22 22:43:31.806 MSK [2033423] LOG: job 1006 completed compressing chunk
            2020-10-22 22:43:31.806 MSK [2033424] LOG: job 1004 completed compressing chunk
            2020-10-22 22:43:31.821 MSK [2033425] LOG: job 1005 completed compressing chunk
            2020-10-22 22:43:32.359 MSK [2033426] LOG: completed compressing chunk _timescaledb_internal._hyper_5_130_chunk
            2020-10-22 22:43:32.425 MSK [2033426] LOG: job 1002 completed compressing chunk
            2020-10-22 22:43:34.464 MSK [2033439] LOG: completed compressing chunk _timescaledb_internal._hyper_4_129_chunk
            2020-10-22 22:43:34.536 MSK [2033439] LOG: job 1003 completed compressing chunk
            2020-10-22 22:43:35.540 MSK [2033453] LOG: completed compressing chunk _timescaledb_internal._hyper_2_128_chunk
            2020-10-22 22:43:35.619 MSK [2033453] LOG: job 1001 completed compressing chunk
            2020-10-22 22:44:22.057 MSK [2033740] LOG: completed compressing chunk _timescaledb_internal._hyper_1_127_chunk
            2020-10-22 22:44:22.269 MSK [2033740] LOG: job 1000 completed compressing chunk
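            To check the compression state of chunks directly, a sketch assuming the TimescaleDB 1.x information views (in 2.x this view was replaced by the chunk_compression_stats() function):

            ```sql
            -- List each chunk's compression status per hypertable
            -- (TimescaleDB 1.x view; column names per the 1.x docs)
            SELECT hypertable_name, chunk_name, compression_status
            FROM timescaledb_information.compressed_chunk_stats;
            ```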
            Last edited by GOID; 23-10-2020, 10:53.


            • GOID
              Member
              • Oct 2014
              • 35

              #8
              Digging in, the problem started with these records in the PG log:
              Code:
              2020-10-20 10:06:57.453 MSK [628011] _HERE_ACCOUNT_@_HERE_DB_ ERROR: cannot update/delete rows from chunk "_hyper_1_1_chunk" as it is compressed
              2020-10-20 10:06:57.453 MSK [628011] _HERE_ACCOUNT_@_HERE_DB_ STATEMENT: delete from history where itemid=10073 and clock<1601050553
              2020-10-20 10:06:57.458 MSK [628011] _HERE_ACCOUNT_@_HERE_DB_ ERROR: cannot update/delete rows from chunk "_hyper_1_1_chunk" as it is compressed
              2020-10-20 10:06:57.458 MSK [628011] _HERE_ACCOUNT_@_HERE_DB_ STATEMENT: delete from history where itemid=10074 and clock<1601050554
              2020-10-20 10:06:57.462 MSK [628011] _HERE_ACCOUNT_@_HERE_DB_ ERROR: cannot update/delete rows from chunk "_hyper_1_1_chunk" as it is compressed
              2020-10-20 10:06:57.462 MSK [628011] _HERE_ACCOUNT_@_HERE_DB_ STATEMENT: delete from history where itemid=10075 and clock<1601050555
              2020-10-20 10:06:57.467 MSK [628011] _HERE_ACCOUNT_@_HERE_DB_ ERROR: cannot update/delete rows from chunk "_hyper_1_1_chunk" as it is compressed
              2020-10-20 10:06:57.467 MSK [628011] _HERE_ACCOUNT_@_HERE_DB_ STATEMENT: delete from history where itemid=10076 and clock<1601050556
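
              If rows in an already-compressed chunk really must be deleted by hand, TimescaleDB (1.5+) allows decompressing the chunk first. A manual sketch, not something the housekeeper does itself; the chunk name is taken from the log above and the WHERE clause mirrors the failing statement:

              ```sql
              -- Temporarily decompress the chunk so row-level DELETEs work again
              SELECT decompress_chunk('_timescaledb_internal._hyper_1_1_chunk');

              -- Now the housekeeper-style delete succeeds
              DELETE FROM history WHERE itemid = 10073 AND clock < 1601050553;

              -- Re-compress the chunk afterwards
              SELECT compress_chunk('_timescaledb_internal._hyper_1_1_chunk');
              ```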


              • yurtesen
                Senior Member
                • Aug 2008
                • 130

                #9
                Are you sure the issue was the housekeeper? The housekeeper should delete rows based on `clock<XXX`.
                It looks like you were deleting some items, and in some cases their history did not get deleted, e.g.:
                Code:
                903243:20201021:102538.652 [Z3005] query failed: [0] PGRES_FATAL_ERROR:ERROR: transparent decompression only supports tableoid system column [delete from trends_uint where itemid=42469 and ctid = any(array(select ctid from trends_uint where itemid=42469 limit 10000))]
                I am assuming that when the override is on, the item history is not deleted at all... interesting.


                • Nolaan
                  Junior Member
                  • Mar 2018
                  • 17

                  #10
                   I'm a little confused.
                   If I want to keep some trend data for 180 days, some for 360 days, and some for 10 years, do I have to turn off compression for TimescaleDB?

                   And if I want to use compression, do I have to keep all trend data for 10 years?

