Table Partitioning

  • f.koch
    Member
    Zabbix Certified Specialist
    • Feb 2010
    • 85

    #16
    How does postgres compare in performance terms with MySQL? I'm looking at postgres for when we move from 1.8 to 2.0 but want to make sure the load levels are the same...

    Is the recommendation still to disable housekeeping with postgres (as it is with mysql) and then use SQL maintenance jobs to keep the data in check? Also I'm guessing we need to be careful with the postgres 'vacuuming' on a large database to make sure performance isn't impacted too much?
    Yes - advice from anyone with experience at high loads would be very welcome. The maintenance issue looks to be by far the biggest challenge. I'm torn between upgrading to V2.0.xx and moving to Percona or Postgres, or staying with V1.8.8 and sticking with MySQL and db partitioning. Some examples of optimum maintenance procedures with V2.0.xx would help greatly.
    Hi,

    I can't compare MySQL and PostgreSQL, as I only run a PostgreSQL database, but I can describe my setup.

    In our environment we have a 1.2 TB Zabbix database running with PostgreSQL 9.1 and Zabbix 1.8.13 (>= 1.8.13 is important, there are some fixes for large databases and PostgreSQL in this version), and we plan to update to Zabbix 2.0.3 and PostgreSQL 9.2 next month, but without data migration (1.2 TB takes too long and is too expensive).

    We have a standalone Zabbix server + frontend and a database server (ibm37xx, 2 CPUs, 12 cores, 36 GB RAM), and the DB server runs with a load of ~1.5.

    I have a partitioned database built with this HowTo: http://www.zabbix.com/wiki/howto/db/postgres/partition
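    The wiki page uses inheritance-based partitioning, which is how this was done before PostgreSQL's declarative partitioning existed. A minimal sketch of the idea, with hypothetical partition names and a simplified history(itemid, clock, value) layout:

    ```sql
    -- Child table holding one month of data; the CHECK constraint lets
    -- the planner skip irrelevant partitions (constraint exclusion).
    CREATE TABLE history_p2012_10 (
        CHECK (clock >= 1349049600 AND clock < 1351728000)  -- [2012-10-01, 2012-11-01) UTC epoch
    ) INHERITS (history);

    -- Route inserts on the parent into the current child table.
    CREATE OR REPLACE FUNCTION history_insert_trigger() RETURNS trigger AS $$
    BEGIN
        INSERT INTO history_p2012_10 VALUES (NEW.*);
        RETURN NULL;  -- row is stored only in the child
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER history_partition_insert
        BEFORE INSERT ON history
        FOR EACH ROW EXECUTE PROCEDURE history_insert_trigger();
    ```

    In practice the trigger function picks the child from NEW.clock and a cron job creates next month's child ahead of time; the wiki script covers those details.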

    I keep the partitions for the current month on a local disk, with a tablespace for each month, and each month I migrate the partitions to their tablespace. I do this online; it works well as long as the partitions are not too big (<= 5 GB).
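    The monthly move boils down to one statement per partition; a sketch with hypothetical names:

    ```sql
    -- Tablespace on the (cheaper/bigger) archive storage.
    CREATE TABLESPACE ts_2012_10 LOCATION '/archive/pg/2012_10';

    -- Moving a partition rewrites the whole table under an exclusive
    -- lock, which is why keeping partitions <= 5 GB matters when doing
    -- this on a live system.
    ALTER TABLE history_p2012_10 SET TABLESPACE ts_2012_10;
    ```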

    My partitions are 20 GB, which is why I'm switching from monthly to daily/monthly partitioning with Zabbix 2 and this HowTo: https://www.zabbix.org/wiki/Docs/how...l_partitioning. And because I'm changing the partition schema, the data migration is too expensive.

    If the data for a month is no longer needed, you can disable partitioning for this table and simply drop the table; no need to run an expensive DELETE.
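    That is the main retention payoff of partitioning. Roughly, with a hypothetical partition name:

    ```sql
    -- Detach the child from the parent, then drop it: metadata-only
    -- work, near-instant regardless of size.
    ALTER TABLE history_p2012_04 NO INHERIT history;
    DROP TABLE history_p2012_04;

    -- The non-partitioned alternative dead-tuples millions of rows and
    -- leaves vacuum to reclaim the space afterwards:
    -- DELETE FROM history WHERE clock < 1335830400;  -- slow, avoid
    ```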

    You can also do this for any maintenance: disable partitioning for a table, run a patch or whatever you need, and bring it back online.

    Vacuum is no problem at all; with version >= 9 I use autovacuum.

    The Zabbix housekeeper is completely disabled.

    The postgresql.conf is tuned a bit, but not that much; I can post the details if needed.

    I can post more data if you like.

    regards flo

    • mcmyst
      Member
      • Feb 2012
      • 72

      #17
      Could someone tell me why this partitioning process is not directly integrated into the Zabbix SQL structure when we first set up the database?

      I think this process should be maintained by the Zabbix team, and not by people contributing their own pieces of code. Maybe the reason is that the Zabbix team prefers people to pay for support and have this done by a Zabbix expert?

      Not everyone here is a database expert...

      • tchjts1
        Senior Member
        • May 2008
        • 1605

        #18
        Originally posted by mcmyst
        Could someone tell me why this partitioning process is not directly integrated into the Zabbix SQL structure when we first set up the database?

        Maybe the reason is that the Zabbix team prefers people to pay for support and have this done by a Zabbix expert?
        Because partitioning is an option that not everyone wishes to employ. It has nothing to do with paying for Zabbix support, and it doesn't require a "Zabbix expert" to accomplish.

        Zabbix 2.x uses foreign keys, and it is a limitation of MySQL that it doesn't support partitioning on tables with foreign keys.
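        The restriction is easy to reproduce; a simplified sketch with a hypothetical table layout (not the exact Zabbix schema):

        ```sql
        CREATE TABLE items (
            itemid BIGINT UNSIGNED NOT NULL PRIMARY KEY
        ) ENGINE=InnoDB;

        CREATE TABLE alerts (
            alertid BIGINT UNSIGNED NOT NULL PRIMARY KEY,
            itemid  BIGINT UNSIGNED NOT NULL,
            clock   INT NOT NULL,
            CONSTRAINT fk_alerts_items FOREIGN KEY (itemid) REFERENCES items (itemid)
        ) ENGINE=InnoDB;

        -- MySQL rejects this with "Foreign keys are not yet supported in
        -- conjunction with partitioning":
        ALTER TABLE alerts PARTITION BY RANGE (clock) (
            PARTITION p2012_10 VALUES LESS THAN (1351728000)
        );
        ```

        The usual workaround in the Zabbix MySQL partitioning guides is to partition only the history/trends tables, which do not carry foreign keys.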

        • kayjay
          Member
          • Jun 2010
          • 43

          #19
          postgresql.conf is tuned a bit, but not that much; I can post the details if needed.

          Yes - please post details, it really helps to complete the picture

          • f.koch
            Member
            Zabbix Certified Specialist
            • Feb 2010
            • 85

            #20
            Originally posted by kayjay
            postgresql.conf is tuned a bit, but not that much; I can post the details if needed.

            Yes - please post details, it really helps to complete the picture

            Here are the tuned parameters:

            max_connections = 400
            shared_buffers = 15360MB
            wal_buffers = 16MB
            checkpoint_segments = 32
            checkpoint_completion_target = 0.9
            effective_cache_size = 16GB
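            For anyone comparing against their own box: these can be checked on the running server (values can differ from postgresql.conf if the server hasn't been reloaded):

            ```sql
            SELECT name, setting, unit, source
            FROM pg_settings
            WHERE name IN ('max_connections', 'shared_buffers', 'wal_buffers',
                           'checkpoint_segments', 'checkpoint_completion_target',
                           'effective_cache_size');
            ```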

            • Colttt
              Senior Member
              Zabbix Certified Specialist
              • Mar 2009
              • 878

              #21
              HERE is another way to partition the tables.

              here is the thread
              Debian-User

              Sorry for my bad English.

              • jix
                Member
                • Feb 2011
                • 73

                #22
                Hi,
                Just improve your MySQL parameters.
                Here is my server, and I don't have any performance problems:
                Number of hosts = 205
                Number of items = 41400
                Number of triggers = 41434
                NVPS = 283

                The database is MySQL 5.1 on an IBM 3650 with 32 GB memory + replication.

                Housekeeper = disabled
                No table partitioning

                Items are kept in history for 120 days.

                Cool, yeah?

                • mcmyst
                  Member
                  • Feb 2012
                  • 72

                  #23
                  Originally posted by jix
                  Hi,
                  Just improve your MySQL parameters.
                  Here is my server, and I don't have any performance problems:
                  Number of hosts = 205
                  Number of items = 41400
                  Number of triggers = 41434
                  NVPS = 283

                  The database is MySQL 5.1 on an IBM 3650 with 32 GB memory + replication.

                  Housekeeper = disabled
                  No table partitioning

                  Items are kept in history for 120 days.

                  Cool, yeah?
                  How are you deleting your old item values?
                  What is the size of your trend/history tables?

                  • mushero
                    Senior Member
                    • May 2010
                    • 101

                    #24
                    We run at a much higher new-values rate, 330/sec, with no problems at all and quite low load, actually, but on decent hardware shared with the front end and lots of users. We expect to hit 500 easily, and 1000 with some upgrades/splitting. And the whole dataset is not in RAM.

                    We keep our housekeeper running tightly and while we have custom scripts we can use, we find the default is fine on 1.8.3 for 100M data items, etc.

                    We are experts at MySQL tuning for these types of loads and have both audit scripts and best practice configs we'd be happy to share.

                    Send me a private message and I can send the scripts; you can also send me your my.cnf and the output from SHOW GLOBAL STATUS.

                    Some general advice, of course: use InnoDB and move to Percona Server 5.5; just as basic upgrades these help a lot. Some details on your OS and hardware would also be helpful, especially RAM and disk controller.
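                    Not mushero's actual config, but a hypothetical my.cnf starting point for a dedicated ~32 GB MySQL/InnoDB box of the kind discussed here (tune against your SHOW GLOBAL STATUS output, as he suggests):

                    ```ini
                    [mysqld]
                    innodb_buffer_pool_size        = 24G    # bulk of RAM to InnoDB
                    innodb_log_file_size           = 512M   # bigger redo logs smooth write bursts
                    innodb_flush_log_at_trx_commit = 2      # relaxed durability, fine for monitoring data
                    innodb_file_per_table          = 1      # one .ibd per table, space reclaimable on drop
                    ```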

                    • kayjay
                      Member
                      • Jun 2010
                      • 43

                      #25
                      I would be interested to know how you deal with the constantly growing database size. Do you periodically shut down the SQL server to free up space? I'm interested to hear that you recommend Percona, but you still have the same problem with InnoDB growing huge over time. How do you solve this issue?

                      Thanks

                      • mushero
                        Senior Member
                        • May 2010
                        • 101

                        #26
                        We don't worry about it and let the housekeeper run, though we may have increased its frequency; that holds our DB to about 25-30 GB with 30 days of data in most cases, but we have 200-300 items per server and 1,000+ servers.

                        Key is enough RAM and fast disks, though.

                        You should be on InnoDB, period, as it's much faster, especially in newer versions, and has less locking - your housekeeper will kill you with locks if it's MyISAM, I'd think.

                        Oh, and use File Per Table to help control size.
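                        With file-per-table, each table lives in its own .ibd file instead of growing the shared ibdata1 file (which never shrinks), so dropping or rebuilding a table actually returns space to the OS. To check and apply:

                        ```sql
                        SHOW VARIABLES LIKE 'innodb_file_per_table';

                        -- Only affects tables created (or rebuilt) after it is enabled;
                        -- an existing table can be rebuilt with:
                        -- ALTER TABLE history ENGINE=InnoDB;
                        ```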

                        Originally posted by kayjay
                        I would be interested to know how you deal with the constantly growing database size. Do you periodically shut down the SQL server to free up space? I'm interested to hear that you recommend Percona, but you still have the same problem with InnoDB growing huge over time. How do you solve this issue?

                        Thanks

                        • kayjay
                          Member
                          • Jun 2010
                          • 43

                          #27
                          It's difficult not to worry, as this was the reason I started this thread originally. With the housekeeper running and history severely restricted, the InnoDB size just continues to rise over time (months) until at some point (150 GB+) the housekeeper runs continuously and the server becomes effectively useless. This is because deleted records are not really deleted, and big DBs run slow. Fixing this requires the DB to go offline for several hours while tables are truncated and recreated. Given this, how do you manage this issue, as I can see no way to ignore it?

                          I have already applied all the well-documented tuning measures and run on some serious hardware. All these measures just serve to move the point at which this will inevitably occur.

                          Thanks

                          • Jason
                            Senior Member
                            • Nov 2007
                            • 430

                            #28
                            It is for this very reason that we moved to Postgres at the same time as we moved to Zabbix 2.0.X. As version 2 is not compatible with table partitioning in MySQL, we just moved to Postgres...

                            We've sorted out our table rotation and then disabled housekeeping... When data gets too old we just drop the relevant partitions...

                            • mushero
                              Senior Member
                              • May 2010
                              • 101

                              #29
                              Not sure I agree that records are not really deleted and that big DBs run slow - there is no basis for this. Yes, they take up space on disk, but that space is reused for the next insert, and even if it isn't, it's never read from disk or into RAM, so it has no effect on anything. You can have 500 GB of empty space and notice no difference, other than some InnoDB maps.

                              But at those sizes, yes, the housekeeper can take a lot of time - I have a purge SQL you can run yourself if you want, which would be faster than the housekeeper but more intense.
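                              Such purge scripts usually boil down to deleting in bounded chunks so InnoDB locking and undo stay manageable; a hypothetical sketch (not mushero's script), repeated until it affects zero rows:

                              ```sql
                              DELETE FROM history
                              WHERE clock < UNIX_TIMESTAMP(NOW() - INTERVAL 30 DAY)
                              LIMIT 10000;
                              ```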

                              Glad you are on serious hardware - I assume this most importantly includes battery-backed RAID and lots of RAM; in your case 32-64 GB would be nice.

                              Of course, you could always take less data - we work hard to lengthen monitoring times and reduce retention.

                              Now a multi-threaded housekeeper would be nice.

                              Originally posted by kayjay
                              It's difficult not to worry, as this was the reason I started this thread originally. With the housekeeper running and history severely restricted, the InnoDB size just continues to rise over time (months) until at some point (150 GB+) the housekeeper runs continuously and the server becomes effectively useless. This is because deleted records are not really deleted, and big DBs run slow. Fixing this requires the DB to go offline for several hours while tables are truncated and recreated. Given this, how do you manage this issue, as I can see no way to ignore it?

                              I have already applied all the well-documented tuning measures and run on some serious hardware. All these measures just serve to move the point at which this will inevitably occur.

                              Thanks

                              • kayjay
                                Member
                                • Jun 2010
                                • 43

                                #30
                                Yes, I would be very interested to see your purge SQL script. Maybe it's just inefficient housekeeping.

                                Jason - how have you found performance with Postgres and Zabbix V2.0.X compared with the older 1.8.X? Did you consider Percona instead? How did you sort out table rotation?

                                Thanks guys
