Zabbix PostgreSQL 9.? autopartitioning

  • alledm
    Member
    • May 2012
    • 84

    #61
    Yes, you can do that; that's exactly what I was suggesting.

    Originally posted by Jason
    I'd have thought you could have used SELECT INTO, though, to move data from the main table into the partition tables and then purged the data from the main table.
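
    A sketch of the quoted approach (assuming the standard Zabbix history_uint schema, where clock holds Unix epoch seconds; the partition table name and dates are hypothetical):
    Code:
    -- Copy one month of rows into a partition table, then purge them
    -- from the main table.
    SELECT * INTO history_uint_p2013_07
    FROM history_uint
    WHERE clock >= extract(epoch FROM timestamp '2013-07-01')
      AND clock <  extract(epoch FROM timestamp '2013-08-01');

    DELETE FROM history_uint
    WHERE clock >= extract(epoch FROM timestamp '2013-07-01')
      AND clock <  extract(epoch FROM timestamp '2013-08-01');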


    • Jason
      Senior Member
      • Nov 2007
      • 430

      #62
      Just spelling it out for him.


      • Andrey123q
        Junior Member
        • Aug 2011
        • 19

        #63
        Jason and alledm, thanks for the answers.
        Unfortunately, I don't have enough knowledge of working with databases, so I don't know how to correctly implement even this simple action:
        re-inserting the rows into the partitions and then deleting the old rows from the old table.


        • Jason
          Senior Member
          • Nov 2007
          • 430

          #64
          Originally posted by Andrey123q
          Jason and alledm, thanks for the answers.
          Unfortunately, I don't have enough knowledge of working with databases, so I don't know how to correctly implement even this simple action:
          I think you'd need to do some reading on databases then, as there will come a point where you need to look at or change something in the database.

          As a starting point look at http://www.postgresql.org/docs/8.1/s...electinto.html for information on select into.

          The simplest option would be to start partitioning from now and then, at a later date, delete all the data from the main table using a DELETE command with a date filter, although this command could take a long while to run on a large table.
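
          For example, such a date filter can be written against the clock column's Unix timestamps (a sketch; the cutoff date is hypothetical):
          Code:
          -- Delete all history rows older than 2013-07-01 (hypothetical cutoff).
          DELETE FROM history_uint
          WHERE clock < extract(epoch FROM timestamp '2013-07-01');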


          • alledm
            Member
            • May 2012
            • 84

            #65
            Then I would suggest you don't do it. It's best if you wait 6 months and then prune it from the main table. Meanwhile, you have 6 months to learn some basic SQL, which will come in very useful if you like to hack around like you are doing now.

            Originally posted by Andrey123q
            Jason and alledm, thanks for the answers.
            Unfortunately, I don't have enough knowledge of working with databases, so I don't know how to correctly implement even this simple action:


            • Andrey123q
              Junior Member
              • Aug 2011
              • 19

              #66
              So, I have tested some approaches on the DB.
              12/7/2013 is the date of the first partition's creation in my test environment.

              1) This deleted all history before 12/6/2013:
              Code:
              DELETE FROM history_uint where clock<1386288000;
              2) The next approach deleted all history before 12/7/2013; the delete_partitions function now works, although it didn't delete statistics prior to 12/7/2013 before the dump-restore procedure:
              Code:
              pg_dump --data-only --table=history_uint zabbix > history_uint.pg
              psql zabbix < history_uint.pg
              SELECT delete_partitions('3 days','day');
              So, as a next step I'll download a book on the subject of databases.
              Last edited by Andrey123q; 09-12-2013, 17:51.


              • Jason
                Senior Member
                • Nov 2007
                • 430

                #67
                Also, before trying anything, make sure you've got a good dump of the database first. Ideally, restore it to a test server so you can try anything out there first.


                • Andrey123q
                  Junior Member
                  • Aug 2011
                  • 19

                  #68
                  Jason, yes, of course. Thanks for the advice.


                  • Jason
                    Senior Member
                    • Nov 2007
                    • 430

                    #69
                    If you've got the partitions working, then you just need to schedule the cron job to drop the old ones and make sure you're retaining enough data. I think there is some information around about Postgres settings to optimise performance.
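
                    For instance, the nightly cron job might call the retention function like this (the '730 days' retention is a hypothetical value; the argument format follows the delete_partitions example earlier in this thread):
                    Code:
                    -- e.g. from cron: psql zabbix -c "SELECT delete_partitions('730 days', 'day');"
                    SELECT delete_partitions('730 days', 'day');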


                    • Andrey123q
                      Junior Member
                      • Aug 2011
                      • 19

                      #70
                      Hi, I have another question regarding the growth of the Zabbix DB after implementing autopartitioning.
                      We now have about 6000 items and 120 hosts. Also, in our production environment we need to preserve statistics for about 2-3 years.
                      After implementing autopartitioning and disabling the housekeeper everything works well, but the daily growth of the DB is now about 500 MB, which ultimately comes to a yearly growth of about 200 GB.
                      So, I wonder, does any solution exist to archive some data from the Zabbix DB, or split it into active and archive parts, and so on?


                      • volter
                        Member
                        Zabbix Certified Specialist
                        • Dec 2011
                        • 85

                        #71
                        There's no need to split anything. The query planner is usually smart enough to only consider the partitions it requires. There may be queries that don't benefit from this; I'm thinking of the query responsible for the time slider scaling. It is sort of pointless, though, and should possibly be replaced.

                        If you run out of storage, you can use PG's tablespaces to conveniently put older partitions on slower media.
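
                        A sketch of the tablespace approach (the directory path and partition name are hypothetical; the directory must already exist and be owned by the postgres user):
                        Code:
                        -- One-off: create a tablespace on the slower media.
                        CREATE TABLESPACE archive LOCATION '/mnt/slow/pg_archive';
                        -- Move an older partition onto it (this rewrites the table and takes a lock).
                        ALTER TABLE history_uint_p2013_07 SET TABLESPACE archive;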


                        • Andrey123q
                          Junior Member
                          • Aug 2011
                          • 19

                          #72
                          If the housekeeper is disabled and the Zabbix DB grows by about 200 GB per year in future (reaching 1 TB in 2-3 years), will that cause any difficulties for Postgres health and performance, or not?


                          • Jason
                            Senior Member
                            • Nov 2007
                            • 430

                            #73
                            One issue I've found with auto-partitioning (and that method of dropping tables) is that disk space doesn't seem to be released efficiently without running a VACUUM FULL, which pretty much knocks Zabbix offline whilst it's running...

                            I'm guessing this can be avoided by tweaking the autovacuum settings. Has anyone managed this?
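
                            One thing worth trying is setting per-table autovacuum storage parameters on the large history tables (the values here are illustrative assumptions, not recommendations):
                            Code:
                            -- Make autovacuum visit history_uint more aggressively so dead tuples
                            -- from large deletes are reclaimed without a blocking VACUUM FULL.
                            ALTER TABLE history_uint SET (
                              autovacuum_vacuum_scale_factor = 0.01,
                              autovacuum_vacuum_cost_delay = 10
                            );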


                            • Jason
                              Senior Member
                              • Nov 2007
                              • 430

                              #74
                              If you gather a lot of text data, then it's also worth adding a partition to history_str as well.

                              I just found that this table in my DB was nearly 8 GB!


                              • Andrey123q
                                Junior Member
                                • Aug 2011
                                • 19

                                #75
                                I also added partitions to history_str, but it doesn't fix the rapid growth of the DB (500 MB per day). Furthermore, as I said earlier, I can't truncate old partitions very often, because we need statistics for at least the last 2 years.
                                So, I don't know what to do about this issue.

