A few days ago I noticed that one of my triggers fires at the same time in the early hours of every morning. After investigating, I found that right after my nightly database partitioning task runs, the busy rate of Zabbix's history syncer processes stays at 100% for about 2 minutes. During those two minutes item values cannot be written to the database, so a trigger that uses the nodata() function fires.
My Zabbix database is more than 300 GB in size; the history table and the history_uint table are each more than 100 GB. The "ALTER TABLE zabbix.* ADD PARTITION" operation on each of them takes more than a minute, more than 3 minutes in total, and I don't know how to speed it up.
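For reference, the nightly maintenance step looks roughly like this. This is only a minimal sketch, assuming MySQL range partitioning on the clock column (the scheme most Zabbix partitioning scripts use); the partition names and boundary timestamps are placeholders, not my exact script:

-- Add the next daily partition to history; the boundary is the Unix
-- timestamp of the first second after the partition's last day.
ALTER TABLE zabbix.history
    ADD PARTITION (PARTITION p2024_01_02 VALUES LESS THAN (1704240000));

-- Same step for history_uint; on a 100+ GB table this is where the stall shows up.
ALTER TABLE zabbix.history_uint
    ADD PARTITION (PARTITION p2024_01_02 VALUES LESS THAN (1704240000));

-- Old data is removed by dropping the oldest partition instead of DELETE.
ALTER TABLE zabbix.history DROP PARTITION p2023_12_01;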
I also wonder why triggers are evaluated only after the values have been stored in the database, rather than from the cache.
Should I give up database partitioning so that, at the very least, Zabbix stops generating these false alarms?
