Zabbix PostgreSQL 9.? autopartitioning
-
Jason and alledm, thanks for the answers.
Unfortunately, I don't have enough knowledge of working with databases, so I don't know how to correctly implement even this simple action:
re-inserting the rows into the new table and then deleting the old rows from the old table.
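In outline, that move might be sketched like this (a hedged sketch only: history_uint_p2013_12 is a hypothetical partition name, and the epoch cutoffs are illustrative values for December 2013):

Code:
-- copy old rows into a (hypothetical) partition table
INSERT INTO history_uint_p2013_12
SELECT * FROM history_uint
WHERE clock >= 1385856000 AND clock < 1388534400;

-- then remove them from the parent table only
DELETE FROM ONLY history_uint
WHERE clock >= 1385856000 AND clock < 1388534400;

DELETE FROM ONLY restricts the delete to the parent table so the freshly copied rows in the child partition are left alone.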
-
I think you'd need to do some reading on databases then, as there will come a point where you need to look at or change something in the database.
As a starting point, look at http://www.postgresql.org/docs/8.1/s...electinto.html for information on SELECT INTO.
The simplest option would be to start partitioning from now and then, at a later date, delete all the old data from the main table using a DELETE command with a date filter, although this command could take a long while to run on a large table.
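Such a date-filtered delete might look like this (a sketch only; the cutoff date is illustrative, and on a very large table you may want to delete in smaller batches):

Code:
-- remove history older than a chosen cutoff (clock is stored as epoch seconds)
DELETE FROM history_uint
WHERE clock < extract(epoch FROM timestamp '2013-12-01');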
-
Then I would suggest you don't do it. It's best if you wait 6 months and prune it from the main table then. Meanwhile you have 6 months to learn some basic SQL, which will come in very useful if you like to hack around the way you are doing now.
-
So, I have tested some approaches on the DB )
12/7/2013 is the date of the first partition's creation in my test environment.
1) This deleted all history before 12/6/2013:

Code:
DELETE FROM history_uint where clock<1386288000;

2) The next approach deleted all history before 12/7/2013; the delete_partitions function now works, although it didn't delete statistics prior to 12/7/2013 before the dump-restore table procedure:

Code:
pg_dump --data-only --table=history_uint zabbix > history_uint.pg
psql zabbix < history_uint.pg
SELECT delete_partitions('3 days','day');

So, as the next step I'll download a book on the subject of databases )
Last edited by Andrey123q; 09-12-2013, 17:51.
-
Hi, I have another question regarding the growth of the Zabbix DB after implementing autopartitioning.
We now have about 6000 items and 120 hosts. In our production environment we also need to preserve statistics for about 2-3 years.
After implementing autopartitioning and disabling the housekeeper everything works well, but the daily growth of the DB is now about 500 MB, which amounts to a yearly growth of about 200 GB.
So I wonder: does any solution exist to archive some data from the Zabbix DB, or to split it into active and archive parts, and so on?
-
There's no need to split anything. The query planner is usually smart enough to only consider the partitions it requires. There may be queries that don't benefit from this; I'm thinking of the query responsible for the time slider scaling. It is sort of pointless anyway and should possibly be replaced.
If you run out of storage, you can use PG's tablespaces to conveniently move older partitions to slower media.
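Moving an old partition onto slower media might look like this (a sketch; the tablespace name, its path, and the partition name are all hypothetical):

Code:
-- create a tablespace on a slower disk (path is illustrative)
CREATE TABLESPACE slow_storage LOCATION '/mnt/slow_disk/pgdata';

-- move a (hypothetical) old partition onto it
ALTER TABLE history_uint_p2013_07 SET TABLESPACE slow_storage;

Note that SET TABLESPACE physically rewrites the table, so it takes an exclusive lock on that partition while the data is copied.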
-
In the case where the housekeeper is disabled and the Zabbix DB grows by about 200 GB per year (reaching roughly 1 TB in 2-3 years), will this cause any difficulties for Postgres health and performance or not?
-
One issue I've found with auto-partitioning (and that method of dropping tables) is that disk space doesn't seem to be released efficiently without running a VACUUM FULL, which pretty much knocks Zabbix offline while it's running...
I'm guessing this can be avoided by tweaking the autovacuum settings. Has anyone managed this?
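One way to make autovacuum more aggressive on the busy history tables is to set per-table storage parameters (a sketch; the numbers are illustrative starting points, not tested recommendations):

Code:
-- vacuum this table after fewer dead rows accumulate,
-- and let the vacuum worker run with less throttling
ALTER TABLE history_uint SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_vacuum_cost_delay   = 10
);

Bear in mind that a plain autovacuum only marks dead rows as reusable; it doesn't shrink the files on disk the way VACUUM FULL does, so this limits future bloat rather than reclaiming space already lost.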
-
I also added partitions to history_str, but that doesn't fix the quick growth of the DB (500 MB per day). Furthermore, as I said earlier, I can't truncate old partitions very often because we need statistics for at least the last 2 years.
So I don't know what to do about this issue.