My system environment is:
Rocky Linux 9.4 on a VMware VM
PHP 8 + Nginx 1.20.1
Postgres 14 (non-timescaleDB)
I have these errors spamming my server log repeatedly.
Code:
4390:20250131:150618.955 received id:1741599 is less than last id:1741758
4334:20250131:150618.958 [Z3008] query failed due to primary key constraint: [0] PGRES_FATAL_ERROR:ERROR: duplicate key value violates unique constraint "history_pkey"
DETAIL: Key (itemid, clock, ns)=(381496, 1738335976, 59543500) already exists.
4334:20250131:150618.960 skipped 8 duplicates
Code:
4393:20250131:150739.515 received id:1617250 is less than last id:1617410
4334:20250131:150739.517 [Z3008] query failed due to primary key constraint: [0] PGRES_FATAL_ERROR:ERROR: duplicate key value violates unique constraint "history_uint_pkey"
DETAIL: Key (itemid, clock, ns)=(374555, 1738336055, 118531500) already exists.
4334:20250131:150739.518 skipped 2 duplicates
It affects two indexes as far as I can tell:
history_uint_pkey
history_pkey
These are the primary key indexes for the history_uint and history tables respectively. I have tried various things on the DB side: checking for duplication in the tables, reindexing the tables, resetting the sequence to MAX(itemid), and recreating the tables and reinserting the data from scratch. However, the errors persist. I now think Zabbix itself is trying to insert duplicate data into the DB for some reason. Does anyone know why this would happen? Could it be a bug?
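For context on my duplication check: the constraint named in the errors is on the composite key (itemid, clock, ns), so a row only collides when all three values match an existing row. A minimal sketch of that condition (key values taken from the log lines above; this is just an illustration, not the SQL I ran against the DB):

```python
# Primary key on history / history_uint is the composite (itemid, clock, ns).
# A new row violates the constraint only when all three fields match.
existing = {
    (381496, 1738335976, 59543500),   # key from the history_pkey error above
    (374555, 1738336055, 118531500),  # key from the history_uint_pkey error above
}

def is_duplicate(itemid: int, clock: int, ns: int) -> bool:
    """True if an incoming row would collide with an existing primary key."""
    return (itemid, clock, ns) in existing

print(is_duplicate(381496, 1738335976, 59543500))  # True  - exact key already exists
print(is_duplicate(381496, 1738335976, 59543501))  # False - same itemid/clock, different ns
```

So a same-second value for the same item is still accepted as long as the nanosecond part differs, which is why I suspect the sender is replaying identical rows rather than the tables being corrupt.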