zabbix postgresql support
  • tekknokrat
    Senior Member
    • Sep 2008
    • 140

    #1

    zabbix postgresql support

Hi,
as we have chosen postgresql as db-backend, and this is the tenth bug where a postgresql query lets the zabbix-server die, I have to ask the developers:
What scenarios do you test zabbix against postgres with, and why do things like this happen:

    Code:
     10909:20090204:202827 [Z3005] Query failed: [0] PGRES_FATAL_ERROR:ERROR:  duplicate key value violates unique constraint "trends_uint_pkey"
     [insert into trends_uint (clock,itemid,num,value_min,value_avg,value_max) values (1233774000,22981,1,4150464512,4150464512,4150464512)]
     10909:20090204:202827 [Z3005] Query failed: [0] PGRES_FATAL_ERROR:ERROR:  current transaction is aborted, commands ignored until end of transaction block
     [update items set nextcheck=1233777496,prevvalue=lastvalue,prevorgvalue=NULL,lastvalue='4150464512',lastclock=1233775696 where itemid=22981]
     10909:20090204:202827 [Z3005] Query failed: [0] PGRES_FATAL_ERROR:ERROR:  current transaction is aborted, commands ignored until end of transaction block
     [select distinct function,parameter,itemid,lastvalue from functions where itemid=22981]
     10905:20090204:202828 One child process died. Exiting ...
     10905:20090204:202830 ZABBIX Server stopped. ZABBIX 1.6.2.
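    The log above also shows the failure cascade: one bad INSERT poisons the whole transaction, so every following statement fails until a rollback. A minimal sketch of that behaviour (plain Python, illustrative names, not Zabbix or PostgreSQL code):

    ```python
    # Sketch of why three queries fail from one bad INSERT: after the first
    # error, PostgreSQL marks the whole transaction aborted, so every later
    # statement is rejected until ROLLBACK. Illustrative only.

    class Transaction:
        def __init__(self):
            self.aborted = False

        def execute(self, sql, fails=False):
            if self.aborted:
                return 'ERROR: current transaction is aborted, commands ignored'
            if fails:
                self.aborted = True   # first failure poisons the transaction
                return 'ERROR: duplicate key value violates unique constraint'
            return 'OK'

    tx = Transaction()
    r1 = tx.execute('insert into trends_uint ...', fails=True)
    r2 = tx.execute('update items ...')     # fails only because tx is aborted
    r3 = tx.execute('select ... from functions ...')
    print(r1, r2, r3, sep='\n')
    ```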
    I have so far seen no thread or bug report where we ever got feedback from one of the developers. Are you kidding, or just ignoring things?
    I mean, I have no problem helping out and doing the things suggested to me, but with no feedback from upstream there is no chance.

    Please give one little statement, or at least the honest advice:
    "choose another database backend, because postgresql support is just experimental"
    Last edited by tekknokrat; 05-02-2009, 17:09.
  • tekknokrat
    Senior Member
    • Sep 2008
    • 140

    #2
    next one today:

    Code:
    [Z3005] Query failed: [0] PGRES_FATAL_ERROR:ERROR:  invalid byte sequence for encoding "UTF8": 0x99
    HINT:  This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding".
     [insert into history_log (id,clock,itemid,timestamp,value,source,severity) values (122810,1233846163,27555,0,'Feb  5 14:41:35 192.168.102.32 proxy:[local7.notice] proxy: 10.13.91.218 - - [05/Feb/2009:14:41:35 +0000] "GET http://example.com/index.php HTTP/1.1" 302 0 "-" "Skype<99> 3.2" B4C1AAC30002BD23 FF:FF:FF:FF:FF:FF','',0)]


    • NOB
      Senior Member
      Zabbix Certified Specialist
      • Mar 2007
      • 469

      #3
      Hi

      we had the same, though perhaps less dramatic, experiences with the 1.4.x versions.
      From 1.4.2 onwards the PostgreSQL backend was fine.
      That's why we decided to use MySQL.

      The quality of the 1.6.x releases is worse because of the large number of changes, e.g.
      the new DBSyncer and the distributed monitoring, and a lot of other
      activity in the background for new customers, I guess.
      We still don't want to use the 1.6.2 release in a productive environment with
      distributed monitoring, even with MySQL.

      For the moment we'll keep using the 1.4.x branch, although the more you
      install, the more (parallel !) work you have to do for the migration,
      esp. if using distributed monitoring.

      In my opinion - gathered mostly from the number of posts in the forum - the quality of the DB backends in descending order is:
      1. MySQL
      2. SQLite (for the new Proxy)
      3. PostgreSQL (after a large gap)
      4. Oracle


      Regards

      Norbert.


      • Aly
        ZABBIX developer
        • May 2007
        • 1126

        #4
        Originally posted by tekknokrat
        Please give one little statement, or at least the honest advice:
        "choose another database backend, because postgresql support is just experimental"
        By reporting those bugs you are helping a lot. We cannot answer all messages asap, as we are busy with development, but when the time comes we look through the known and reported bugs and fix them.
        Zabbix | ex GUI developer


        • nth
          Junior Member
          • Feb 2009
          • 13

          #5
          I just finished setting up a server (zabbix 1.6.2 + pg8.3.6) and as soon as I started it, I got the same error. So I did a:

          Code:
          create rule "delete_trend_duplicates" as
              on insert to "trends_uint"
              where (itemid = new.itemid) and (clock = new.clock)
              do instead
                  delete from trends_uint
                  where (itemid = new.itemid) and (clock = new.clock);
          so I could start the server before troubleshooting this. I'll do some digging when I have the time to see what more I can find out. I do realize this workaround loses the trend data; a different path could be keeping the old data:
          Code:
          create rule "ignore_trend_duplicates" as
              on insert to "trends_uint"
              where (itemid = new.itemid) and (clock = new.clock)
              do instead nothing;
          or the new one:
          Code:
          create rule "update_trend_duplicates" as
              on insert to "trends_uint"
              where (itemid = new.itemid) and (clock = new.clock)
              do instead
                  update trends_uint
                  set num = new.num,
                      value_min = new.value_min,
                      value_avg = new.value_avg,
                      value_max = new.value_max
                  where (itemid = new.itemid) and (clock = new.clock);
          But I'd rather have no data than corrupted data until this gets fixed.
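          The three rules differ only in which row survives a key collision. A toy model of that choice (plain Python, hypothetical names, not Zabbix or PostgreSQL code):

          ```python
          # Toy model of which row survives a (itemid, clock) collision
          # under each of the three rule strategies. Illustrative only.

          def resolve_insert(strategy, table, key, new_row):
              """Insert new_row at key, resolving a duplicate per strategy."""
              if key not in table:
                  table[key] = new_row
              elif strategy == 'delete':   # delete_trend_duplicates: the old
                  del table[key]           # row is deleted, the insert replaced
              elif strategy == 'ignore':   # ignore_trend_duplicates: keep old
                  pass
              elif strategy == 'update':   # update_trend_duplicates: keep new
                  table[key] = new_row

          key = (22981, 1233774000)        # (itemid, clock) pkey from the log
          results = {}
          for strategy in ('delete', 'ignore', 'update'):
              table = {}
              resolve_insert(strategy, table, key, 'old row')
              resolve_insert(strategy, table, key, 'new row')
              results[strategy] = table.get(key)

          print(results)
          # → {'delete': None, 'ignore': 'old row', 'update': 'new row'}
          ```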


          • hulting74
            Member
            • Nov 2008
            • 30

            #6
            Typo...?

            Hi

            Noticed that my Zabbix crashed with a similar error message:

            Code:
             LOG:  unexpected EOF on client connection
             ERROR:  value too long for type character varying(255)
             ERROR:  current transaction is aborted, commands ignored until end of transaction block

            My error was a simple typo in the triggers.

            Instead of typing "logseverity" I used "last"...?!
            CORRECT: {Windows Logging:eventlog[Application].logseverity( 4 ) }=4
            WRONG: {Windows Logging:eventlog[Application].last( 4 ) }=4

            Rgs

            Stefan


            • nth
              Junior Member
              • Feb 2009
              • 13

              #7
              I took a quick look at dbcache.c, where the insert is done. It first checks whether the pair (itemid, clock) exists and executes the insert only if it doesn't, otherwise an update. I can only assume there's a synchronization issue between threads: in between the check for the record and the actual insert, the record gets inserted by another thread. So I changed the rule to do an update (it's not pretty, and I'm still not sure it's the right thing to do, since I know very little about the inner workings of the whole server):
              Code:
              CREATE RULE update_trend_duplicates AS
                  ON INSERT TO trends_uint
                  WHERE (SELECT count(*) FROM trends_uint
                         WHERE trends_uint.itemid = new.itemid
                           AND trends_uint.clock = new.clock) > 0
                  DO INSTEAD
                      UPDATE trends_uint
                      SET value_min = CASE
                              WHEN new.value_min > trends_uint.value_min THEN trends_uint.value_min
                              ELSE new.value_min
                          END,
                          value_max = CASE
                              WHEN new.value_max > trends_uint.value_max THEN new.value_max
                              ELSE trends_uint.value_max
                          END,
                          value_avg = (trends_uint.num * trends_uint.value_avg + new.num * new.value_avg)
                                      / (trends_uint.num + new.num),
                          num = trends_uint.num + new.num
                      WHERE trends_uint.itemid = new.itemid AND trends_uint.clock = new.clock;
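              The suspected race can be made concrete with a small deterministic sketch (plain Python, illustrative names only, not Zabbix code): both writers pass the existence check before either one inserts, so the loser's INSERT hits the primary key.

              ```python
              # Deterministic sketch of the check-then-insert race: both writers
              # run the "does the row exist?" check before either inserts, so the
              # second INSERT violates the primary key. Illustrative only.

              class TrendsTable:
                  """Toy table with a unique (itemid, clock) primary key."""
                  def __init__(self):
                      self.rows = {}

                  def exists(self, itemid, clock):
                      return (itemid, clock) in self.rows

                  def insert(self, itemid, clock, value):
                      if (itemid, clock) in self.rows:
                          raise KeyError('duplicate key value violates unique constraint')
                      self.rows[(itemid, clock)] = value

              table = TrendsTable()

              # Both writers check first: neither sees a row, so both decide to INSERT.
              a_should_insert = not table.exists(22981, 1233774000)
              b_should_insert = not table.exists(22981, 1233774000)

              table.insert(22981, 1233774000, 4150464512)      # writer A wins
              try:
                  table.insert(22981, 1233774000, 4150464512)  # writer B hits the pkey
                  race_outcome = 'no error'
              except KeyError:
                  race_outcome = 'duplicate key error'

              print(a_should_insert, b_should_insert, race_outcome)
              # → True True duplicate key error
              ```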


              • tekknokrat
                Senior Member
                • Sep 2008
                • 140

                #8
                Error still appears with 1:1.6.3.0pre20090312-0hardy1. Is there some update on that?

                Code:
                 10344:20090325:150142 [Z3005] Query failed: [0] PGRES_FATAL_ERROR:ERROR:  duplicate key value violates unique constraint "trends_uint_pkey"
                 [insert into trends_uint (clock,itemid,num,value_min,value_avg,value_max) values (1237989600,22532,1,4128448512,4128448512,4128448512)]
                 10344:20090325:150143 [Z3005] Query failed: [0] PGRES_FATAL_ERROR:ERROR:  current transaction is aborted, commands ignored until end of transaction block
                 [update items set nextcheck=1237989867,prevvalue=lastvalue,prevorgvalue=NULL,lastvalue='4128448512',lastclock=1237989687 where itemid=22532]
                 10344:20090325:150143 [Z3005] Query failed: [0] PGRES_FATAL_ERROR:ERROR:  current transaction is aborted, commands ignored until end of transaction block
                 [select distinct function,parameter,itemid,lastvalue from functions where itemid=22532]
                 10336:20090325:150143 One child process died. Exiting ...
                 10336:20090325:150145 Can't find shared memory for database cache. [Invalid argument]
                 /usr/sbin/zabbix_server [30412]: Warning: ZABBIX semaphores already exist, trying to recreate.
                 ...


                • tekknokrat
                  Senior Member
                  • Sep 2008
                  • 140

                  #9
                  @nth
                  I had a look at your solution of creating a rule on the trends_uint table. As the data is not stored anyway when there's a duplicate key, imo the data loss is only a minor cost compared to a server crash. Did you get it running stable with this rule? Do you also run a dm setup with a proxy?

                  Also @Devs
                  Any clue where this duplicate key issue happens?


                  • Aly
                    ZABBIX developer
                    • May 2007
                    • 1126

                    #10
                    "trends_uint_pkey" - this is postgres internal key. ZABBIX doesn't have such fields.
                    Zabbix | ex GUI developer


                    • tekknokrat
                      Senior Member
                      • Sep 2008
                      • 140

                      #11
                      Originally posted by Aly
                      "trends_uint_pkey" - this is postgres internal key. ZABBIX doesn't have such fields.
                      Well, thanks for the answer. What I need to know is where this key is created and where zabbix makes use of it. The question is where the duplicate entries come from and how to get rid of them. Any suggestions?


                      • nth
                        Junior Member
                        • Feb 2009
                        • 13

                        #12
                        "trends_uint_pkey" is the primary key of the table trends_uint. I've looked at the two inserts that cause the error; they're identical, so even the update solution was overdoing it, and a plain "do instead nothing" would be enough. Unfortunately, it seems PG is not able to apply a rule to more than one query at a time, so I would still get an occasional crash. I've updated to 1.6.3 today but I'm still getting the error, albeit not as often. I'm trying to figure out the sources atm; at first glance it seems there is actually no synchronization between worker threads, but that certainly can't be so.


                        • slmeyer
                          Junior Member
                          • Aug 2008
                          • 10

                          #13
                          happens with 1.6.4 too

                          Code:
                           4447:20090529:152418 [Z3005] Query failed: [0] PGRES_FATAL_ERROR:ERROR:  duplicate key value violates unique constraint "trends_uint_pkey"
                           [insert into trends_uint (clock,itemid,num,value_min,value_avg,value_max) values (1243602000,22458,1,0,0,0)]
                            4447:20090529:152418 [Z3005] Query failed: [0] PGRES_FATAL_ERROR:ERROR:  current transaction is aborted, commands ignored until end of transaction block
                           [update items set nextcheck=1243606458,prevvalue=lastvalue,prevorgvalue=NULL,lastvalue='0',lastclock=1243603458 where itemid=22458]
                            4447:20090529:152418 [Z3005] Query failed: [0] PGRES_FATAL_ERROR:ERROR:  current transaction is aborted, commands ignored until end of transaction block
                           [select distinct function,parameter,itemid,lastvalue from functions where itemid=22458]
