
[Z3008] query failed due to primary key constraint:

  • fs.geoffk
    Junior Member
    • Feb 2023
    • 8

    #1

    [Z3008] query failed due to primary key constraint:

    My system environment is:

    Rocky Linux 9.4 on a VMware VM
    PHP 8 + Nginx 1.20.1
    Postgres 14 (non-TimescaleDB)

    I have these errors spamming my server log repeatedly.

    Code:
    4390:20250131:150618.955 received id:1741599 is less than last id:1741758
    4334:20250131:150618.958 [Z3008] query failed due to primary key constraint: [0] PGRES_FATAL_ERROR:ERROR: duplicate key value violates unique constraint "history_pkey"
    DETAIL: Key (itemid, clock, ns)=(381496, 1738335976, 59543500) already exists.
    4334:20250131:150618.960 skipped 8 duplicates
    Code:
    4393:20250131:150739.515 received id:1617250 is less than last id:1617410
    4334:20250131:150739.517 [Z3008] query failed due to primary key constraint: [0] PGRES_FATAL_ERROR:ERROR: duplicate key value violates unique constraint "history_uint_pkey"
    DETAIL: Key (itemid, clock, ns)=(374555, 1738336055, 118531500) already exists.
    4334:20250131:150739.518 skipped 2 duplicates


    It affects two indexes as far as I can tell:

    history_uint_pkey
    history_pkey

    These are the indexes for the history_uint and history tables respectively. I have tried various things on the DB side: checking the tables for duplicates (along the lines of the query below), reindexing them, resetting the sequence to MAX(itemid), and even recreating the tables and reinserting the data from scratch, but the errors persist. I now think Zabbix is trying to insert duplicated data into the DB for some reason. Does anyone know why this would happen? Could it be a bug?
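
    Roughly the kind of duplicate check I mean (a sketch from memory, assuming the stock Zabbix schema where the primary key is (itemid, clock, ns)):

    Code:
    -- Look for rows that would collide on the primary key (itemid, clock, ns).
    -- With the constraint in place this should normally return nothing.
    SELECT itemid, clock, ns, COUNT(*) AS copies
    FROM history
    GROUP BY itemid, clock, ns
    HAVING COUNT(*) > 1;

    -- Same check for the history_uint table.
    SELECT itemid, clock, ns, COUNT(*) AS copies
    FROM history_uint
    GROUP BY itemid, clock, ns
    HAVING COUNT(*) > 1;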
  • Hamardaban
    Senior Member
    Zabbix Certified Specialist, Zabbix Certified Professional
    • May 2019
    • 2713

    #2
    Most likely the error is related to the time on the agents or a proxy not being synchronized with the Zabbix server.
    Data arrives with a timestamp that has already been recorded and so cannot be inserted into the table.
    Or, on the contrary, some node on the network “lives” in the future and makes a mess for everyone else.
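
    If you have access to the database, one rough way to spot a node that “lives” in the future is to look for history rows stamped ahead of the server clock (just a sketch, assuming the standard history table layout):

    Code:
    -- Rows whose collection timestamp is ahead of the DB server's clock.
    SELECT itemid, clock, to_timestamp(clock) AS recorded_at
    FROM history
    WHERE clock > EXTRACT(EPOCH FROM now())
    ORDER BY clock DESC
    LIMIT 50;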


    • fs.geoffk
      Junior Member
      • Feb 2023
      • 8

      #3
      OK, everything on the network is supposed to be using the same NTP servers, so they should all be in sync. That isn't always the case, though.

      Is there a way to identify which host has an incorrect clock?


      • Hamardaban
        Senior Member
        Zabbix Certified Specialist, Zabbix Certified Professional
        • May 2019
        • 2713

        #4
        Yes, there is a way.
        You have the itemid from the error message: open the edit page of any item and replace the itemid number in the browser URL.
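
        If you prefer to do it in the database instead, something like this should map the itemids from your log lines to their hosts (a sketch against the standard items/hosts tables):

        Code:
        -- Resolve the itemids from the error messages to host and item names.
        SELECT h.host, i.itemid, i.name, i.key_
        FROM items i
        JOIN hosts h ON h.hostid = i.hostid
        WHERE i.itemid IN (381496, 374555);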


        • fs.geoffk
          Junior Member
          • Feb 2023
          • 8

          #5
          I have solved this problem. Initially I found a Linux VM with the wrong time zone set on it and thought I had fixed the issue then and there, but there were still errors in the logs. I used the itemid from the errors to change the URL of an item graph so it pointed at the itemid from the log; the graph label includes the hostname. These turned out to be several Windows Server VMs, all Domain Controllers that don't get rebooted very often and were running an old version of the Zabbix agent. Once I upgraded the agents on these Windows VMs, the remaining errors stopped.
