Duplicate entry for key PRIMARY

  • zbxadmin
    Junior Member
    • Jun 2022
    • 4

    #1

    Duplicate entry for key PRIMARY

    I have a problem with duplicate entries for key 'PRIMARY' after upgrading Zabbix 5.0 > 6.0.6.
    The first DB upgrade went well and there were no problems (zbx1.txt), but after I ran the manual database upgrade for primary keys (zbx2.txt) I got lots of "query failed" errors with duplicates (zbx3.txt).
    Here is one example in zabbix_server.log:
    17170:20220629:130620.865 [Z3008] query failed due to primary key constraint: [1062] Duplicate entry '29095-1656507978-1' for key 'PRIMARY'
    17170:20220629:130620.876 skipped 48 duplicates

    I have already tried different fixes (deleted the history tables, recreated them, partitioning, etc.) but no luck. Maybe someone has an idea what is wrong and what I should do to fix this error.
    Or do I strictly need primary keys for the history tables at all?

    My system is
    Ubuntu 20.04
    PHP 7.4.3
    MariaDB 10.6.8



    Attached Files
  • Markku
    Senior Member
    Zabbix Certified Specialist · Zabbix Certified Professional · Zabbix Certified Expert
    • Sep 2018
    • 1781

    #2
    Skipped records are records with identical itemid, clock and ns fields (see "PRIMARY KEY (itemid,clock,ns)" in the SQL script). Basically you'll have these whenever data has been saved to the Zabbix database with an ns (nanoseconds) value of zero. I had this when my custom app saved data as Zabbix trapper items using only the clock field (seconds) and sent more than one value per second.

    If you can live without those skipped records, you are fine. (That was my case, I was happy with only one value saved per second. Then I fixed the app to use nanoseconds as well, for future purposes.)

    Otherwise, you first need to fix the data in the old database by assigning distinct ns values to all records that share the same itemid and clock (and have ns = 0), so that each (itemid, clock, ns) combination is unique.
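
    To gauge the scope of the problem first, a query along these lines can list the colliding rows (a sketch against the history table; the same check applies to history_uint, history_str, history_log and history_text):

    ```sql
    -- Sketch: list (itemid, clock, ns) combinations that would collide
    -- on the new PRIMARY KEY (itemid, clock, ns). Run per history table.
    SELECT itemid, clock, ns, COUNT(*) AS dup_count
    FROM history
    GROUP BY itemid, clock, ns
    HAVING COUNT(*) > 1
    ORDER BY dup_count DESC;
    ```

    If everything it returns has ns = 0, that matches the trapper-without-nanoseconds scenario described above.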

    Markku


    • Rohllik28
      Junior Member
      • May 2024
      • 9

      #3
      With your permission, I am resurrecting this old topic.

      I have a similar issue. My own tool feeds the Zabbix trapper, and I see duplicates in the log. Based on this topic, I modified the tool to send data with nanosecond precision, but duplicates still occur. The probability that the records really land on the exact same timestamp is astronomically small.

      The values I marked on the screenshot should be the nanoseconds, which should be valid for the database, right? So it seems that Zabbix/the trapper does not recognize the nanosecond timestamps(?)
      What could be the potential problem, and what solutions could I try?

      [Screenshot: image.png – the marked values are the suspected nanosecond timestamps]
      Attached Files
      Last edited by Rohllik28; 12-07-2024, 17:19.


      • Markku
        Senior Member
        Zabbix Certified Specialist · Zabbix Certified Professional · Zabbix Certified Expert
        • Sep 2018
        • 1781

        #4
        I'm now testing with Zabbix server+proxy+sender 7.0.0, and it seems to save the nanoseconds correctly:

        $ zabbix_sender -z 192.168.7.82 -s Zabbix70-agent -k actiontest -o 0
        Response from "192.168.7.82:10051": "processed: 1; failed: 0; total: 1; seconds spent: 0.000065"
        sent: 1; skipped: 0; total: 1


        Code:
        MariaDB [zabbix]> select itemid from items where hostid = 10582 and key_ = "actiontest";
        +--------+
        | itemid |
        +--------+
        |  46004 |
        +--------+
        
        MariaDB [zabbix]> select * from history_uint where itemid=46004 order by clock desc limit 1;
        +--------+------------+-------+-----------+
        | itemid | clock      | value | ns        |
        +--------+------------+-------+-----------+
        |  46004 | 1720798871 |     0 | 761968449 |
        +--------+------------+-------+-----------+
        Are you sure that your own tool uses the correct format to send the nanoseconds? That's the obvious first question.
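
        One way to rule out the tool itself is to replay a few values through zabbix_sender with explicit per-value timestamps: combined with -T (--with-timestamps), the -N (--with-ns) option makes each input line carry its own nanosecond value. A sketch reusing the host, key and server IP from the example above (the file path is arbitrary; check the flags against your zabbix_sender version):

        ```shell
        # Input file format for "zabbix_sender -T -N":
        #   <hostname> <key> <clock> <ns> <value>
        cat > /tmp/actiontest_values.txt <<'EOF'
        Zabbix70-agent actiontest 1720798871 761968449 0
        Zabbix70-agent actiontest 1720798871 761968450 1
        EOF

        # Send with per-value timestamps (-T) and nanoseconds (-N);
        # commented out here because it needs a reachable Zabbix server:
        # zabbix_sender -z 192.168.7.82 -T -N -i /tmp/actiontest_values.txt
        ```

        If values sent this way land in history with the expected ns, compare that input format against what your tool actually puts on the wire.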

        Markku

