Table Partitioning on Zabbix 2.2

  • kevinw
    Junior Member
    • Oct 2013
    • 5

    #16
    Originally posted by nms_user
    With the upgrade to 2.0, I had to adjust the script (thus the stored procs and the database itself). No big change, just removing partitioning from the tables affected by foreign keys (acknowledges, alerts, auditlog, events, service_alarms). Look at the replies to the above-mentioned howto.

    I don't actually know of any db schema changes in 2.2 that would prohibit the use of partitioning as in 2.0.

    To all the readers: are you aware of a showstopper?
    No showstopper. We are using a 3-node MySQL Galera cluster with 2.2 and are running >5100 NVPS. We currently do significant partitioning on the bigger history, history_uint, trends and trends_uint tables - the rest aren't big enough to warrant it.

    I'm tempted to partition them all anyway if it allows me to disable that part of the HK altogether. I would also really like to partition the alerts and events tables - but I think that involves removing the FKs and I'm not that daring yet.

    I was also looking at XtraDB, since Percona is supposed to give better performance than community MySQL - but the comments here don't seem to reflect that, so it will have to wait until I can find time to do more testing.
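    For anyone wondering what partitioning these tables actually involves: it is a RANGE partition on the clock column (a Unix timestamp). A minimal sketch of what the howtos linked in this thread set up - partition names and dates here are purely illustrative, and in practice a stored procedure creates and drops them for you:
    Code:
    ALTER TABLE history PARTITION BY RANGE (clock) (
        PARTITION p2014_03_12 VALUES LESS THAN (UNIX_TIMESTAMP('2014-03-13 00:00:00')),
        PARTITION p2014_03_13 VALUES LESS THAN (UNIX_TIMESTAMP('2014-03-14 00:00:00')),
        PARTITION p2014_03_14 VALUES LESS THAN (UNIX_TIMESTAMP('2014-03-15 00:00:00'))
    );
    This works on the history*/trends* tables because MySQL partitioned tables cannot carry foreign keys and require the partitioning column to be part of every unique key - which history*/trends* satisfy, and which is exactly why the FK-carrying tables (events, alerts, ...) are the problem.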


    • yuusou
      Junior Member
      • Feb 2014
      • 24

      #17
      Don't get me wrong, Percona XtraDB performance is top notch - until you turn HK on for trends and history. That destroys performance completely.


      • nms_user
        Member
        • Feb 2009
        • 43

        #18
        @kevinw:
        OK, thanks for the answer - so we can do the upgrade without headache :-)

        @yuusou:
        Yes, WE know that the housekeeper destroys the performance of ANY db system (regardless of whether it's native MySQL, Percona or Postgres)... That's why we moved to partitioning. You never get the housekeeper as "silent" as you would like while it's running.

        The only queue peak of the whole day happens when partitioning does its job. And it's done in 1 minute - no housekeeping running for hours and hours, giving you plenty of painful false alarms because the db/disk subsystem is absolutely overloaded and the Zabbix server can't insert new values because of timeouts.

        Housekeeping is OK if you have a small db / low NVPS, or a very capable SSD disk backend.
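        The reason the partitioning job finishes in a minute is that dropping a partition is a metadata operation, while the housekeeper deletes row by row. Roughly (partition name illustrative):
        Code:
        -- partition rotation: near-instant, regardless of row count
        ALTER TABLE history DROP PARTITION p2014_03_01;

        -- what the housekeeper effectively does instead: hours of I/O on a big table
        DELETE FROM history WHERE clock < UNIX_TIMESTAMP('2014-03-02 00:00:00');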
        Last edited by nms_user; 13-03-2014, 12:08.


        • kevinw
          Junior Member
          • Oct 2013
          • 5

          #19
          Originally posted by nms_user
          @kevinw:
          OK, thanks for the answer - so we can do the upgrade without headache :-)
          There's always headaches...

          We have HK every hour with MaxHousekeeperDelete=10000. It runs for about 15 minutes every hour and we do see some spikes in delete activity, but nothing that hurts our normal activity.

          Given the granularity of HK in 2.2, has anyone determined which of the various categories of HK are the killers? Even if I fully partition all history and trends tables and turn those HK options off, I still need it for events, alerts and audit. Hard to tell which of the HK activities is the high runner...
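          One way to get a rough feel for it (a sketch - in 2.x the housekeeper table queues pending deletions for removed items and the like, so the per-table counts at least hint at where the backlog is):
          Code:
          SELECT tablename, COUNT(*) AS pending
          FROM housekeeper
          GROUP BY tablename
          ORDER BY pending DESC;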


          • foboss
            Junior Member
            • Nov 2013
            • 7

            #20
            Hi!

            Is there any MySQL partitioning manual for 2.2? Everything I've googled is articles or forum posts for 1.8 or 2.0.

            I'm a newbie at partitioning and don't want to do something wrong.


            • Navern
              Member
              • May 2013
              • 33

              #21
              Originally posted by foboss
              Hi!

               Is there any MySQL partitioning manual for 2.2? Everything I've googled is articles or forum posts for 1.8 or 2.0.

               I'm a newbie at partitioning and don't want to do something wrong.
               Same question. Any answer would be appreciated.


              • foboss
                Junior Member
                • Nov 2013
                • 7

                #22
                Hi again!

                Nobody answered, so I've found a working manual for 2.2 on my own:
                http://zabbix.org/wiki/Docs/howto/mysql_partition


                • vesal
                  Junior Member
                  • Dec 2013
                  • 13

                  #23
                  Do I need a fresh, clean database in order to create table partitions, or can I implement it on the currently running one?


                  • Navern
                    Member
                    • May 2013
                    • 33

                    #24
                    Originally posted by vesal
                     Do I need a fresh, clean database in order to create table partitions, or can I implement it on the currently running one?
                     As far as I know you can do it on a running database, but if you have a large one it could take a while. And the table will be read-locked, so you won't get new values from zabbix-server in the meantime.


                    • Navern
                      Member
                      • May 2013
                      • 33

                      #25
                      Originally posted by foboss
                      Hi again!

                      Nobody answered, so I've found working manual for 2.2 on my own:
                      http://zabbix.org/wiki/Docs/howto/mysql_partition
                      Thank you, that topic was very helpful. I've implemented partitioning. I used an interval of 720 hours (approximately one month) for the trends table.

                      Do you think that would be good enough?

                      I've also written a bash script to put into crontab and can share it if someone needs it.
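                      For reference, a cron variant along those lines might look like this (a sketch; partition_maintenance_all is the wrapper procedure defined in the zabbix.org howto linked above, and the credentials are placeholders):
                      Code:
                      # /etc/cron.d/zabbix-partitioning
                      0 3 * * * root mysql -u<user> -p<pwd> zabbix -e "CALL partition_maintenance_all('zabbix');" >/dev/null 2>&1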


                      • foboss
                        Junior Member
                        • Nov 2013
                        • 7

                        #26
                        I keep trends for 365 days. Those tables are not very big.

                        And I use the MySQL event scheduler to call partition_maintenance instead of Bash:
                        Code:
                        DELIMITER $$
                        
                        CREATE EVENT IF NOT EXISTS `e_part_manage`
                            ON SCHEDULE EVERY 1 DAY
                            STARTS '2014-01-01 04:00:00'
                            ON COMPLETION PRESERVE
                            ENABLE
                            COMMENT 'Zabbix partitioning'
                            DO BEGIN
                                CALL partition_maintenance(14, 365, 14);
                            END$$
                        
                        DELIMITER ;
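                        One caveat with the event approach: MySQL only runs events while the event scheduler is enabled, and it is off by default in MySQL 5.x. Check and enable it with:
                        Code:
                        SHOW VARIABLES LIKE 'event_scheduler';
                        SET GLOBAL event_scheduler = ON;  -- or set event_scheduler=ON under [mysqld] in my.cnf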


                        • Alex_SYB
                          Senior Member
                          • Feb 2012
                          • 133

                          #27
                          Old thread, still relevant I think.

                          Does this still hold true for 2.4, as well as the howto linked above?

                          That last script - was it creating 14 days of partitions in the future every day?

                          CALL partition_maintenance(14, 365, 14);
                          Or does it just make sure there are 14 future partitions available?


                          • kloczek
                            Senior Member
                            • Jun 2006
                            • 1771

                            #28
                            Originally posted by yuusou
                            I can make a script myself, no problem (and share it with whoever wants it). I'm more worried about all the foreign keys.
                            There are no foreign keys on the history* and trends* tables.
                            I partitioned the history* and trends* tables back when I was using Zabbix 1.8.x, using the solution described at http://zabbixzone.com/zabbix/partitioning-tables/ as a template.

                            After applying something like the above (just ignore everything other than partitioning the history*/trends* tables), the whole housekeeping process is handled by a single crontab entry.
                            My /etc/cron.d/zabbix-mysql-master-housekeeping file from the host with the master database:
                            Code:
                             0 * * * *             root    mysql -u<user> -p<pwd> zabbix -e "call zabbix.create_zabbix_partitions; call create_next_monthly_partitions('zabbix', 'trends'); call create_next_monthly_partitions('zabbix', 'trends_uint');" >/dev/null 2>&1
                            After partitioning the history* and trends* tables I successfully migrated to 2.0, and after that to 2.2.
                            I did the whole partitioning on a slave DB. Once the tables were partitioned and synced with the master, all that was necessary was to switch the database backend to the slave, promoting that instance to master.

                            Using the same approach (with a slave) I was able to keep Zabbix downtime to an absolute minimum during the upgrade process, but the whole set of changes needs to be carefully divided into two parts: one containing only new fields and other changes which cannot affect syncing data from the master, and a separate second part of changes which must be done with synchronization from the master stopped.
                            This approach minimizes Zabbix downtime and the gaps in monitoring data during the final stages of upgrades.

                            In my case this approach was necessary because the out-of-the-box upgrade procedure, with our size of database, would have taken about two days. After splitting the necessary modifications it was possible to get downtime on the 1.8 -> 2.0 upgrade down to about 20 min, and on the 2.0 -> 2.2 upgrade to about 3-4 min (the 2.0 -> 2.2 upgrade is far simpler and less complicated).
                            Without such a split procedure the estimated upgrade time was about 5 h.

                            When testing an upgrade it is really useful to do all the changes on the slave first and attach the (partially upgraded, if you are going to use such a split upgrade process) database to the upgraded Zabbix web frontend, to confirm that the whole upgrade can be done without losing data.
                            Last edited by kloczek; 23-09-2014, 20:16.
                            http://uk.linkedin.com/pub/tomasz-k%...zko/6/940/430/
                            https://kloczek.wordpress.com/
                            zapish - Zabbix API SHell binding https://github.com/kloczek/zapish
                            My zabbix templates https://github.com/kloczek/zabbix-templates


                            • kloczek
                              Senior Member
                              • Jun 2006
                              • 1771

                              #29
                              Originally posted by foboss
                              I keep trends for 365 days. Those tables are not very big. And I use the MySQL event scheduler to call partition_maintenance instead of Bash:
                              Code:
                              DELIMITER $$
                              
                              CREATE EVENT IF NOT EXISTS `e_part_manage`
                                  ON SCHEDULE EVERY 1 DAY
                                  STARTS '2014-01-01 04:00:00'
                                  ON COMPLETION PRESERVE
                                  ENABLE
                                  COMMENT 'Zabbix partitioning'
                                  DO BEGIN
                                      CALL partition_maintenance(14, 365, 14);
                                  END$$
                              
                              DELIMITER ;
                              Hmm... seems your solution with CREATE EVENT is better than mine calling the procedures from cron.


                              • Crypty
                                Member
                                • Jul 2012
                                • 80

                                #30
                                Hi,

                                I'm running Zabbix 2.4.0 and I'm looking for ways to reduce the database size.

                                The sizes of my "history/trends" tables:

                                12K history.frm
                                16G history.ibd
                                12K history_log.frm
                                128K history_log.ibd
                                12K history_str.frm
                                21M history_str.ibd
                                12K history_text.frm
                                3.1G history_text.ibd
                                12K history_uint.frm
                                11G history_uint.ibd
                                ...
                                ...
                                12K trends.frm
                                273M trends.ibd
                                12K trends_uint.frm
                                845M trends_uint.ibd
                                ...
                                ...

                                ---

                                Now I have a simple question. IF I had just one host with one item updated every hour, and I want to keep 1 week of history and 1 month of trends: can I say that without any partitioning, just with housekeeping, the DB size stays the same after one month? E.g. the DB size would be 100 MB after one month - can I say that every hour the housekeeper deletes data that is too old, so it can be replaced by the newest values? So after 1 year the database size would still be 100 MB (if I don't count various events etc. which can alter it) - my history table would be 100 MB "forever"? Thanks!

                                If yes, can I say the same if I have tens of hosts with hundreds of items with various history/trend intervals? If the maximum trend storage interval is 2 years and the maximum history storage is 1 month, then after those 2 years I will just be replacing old data, so the database size would never grow again? E.g. I'll have 200 GB of data after 2 years, and if I don't change the parameters I will always have those 200 GB in the future...

                                Or do I need to configure partitioning to get this? And if I have various intervals for history/trends depending on hosts and items, can I use partitioning? I can see that it allows storing e.g. 28 days of history and 2 years of trends... but that's the same for ALL items. I would like to have e.g. 90 days / 1 year for some things, and just 7 days of history and 30 days of trends for others... Can it be done?

                                Thanks a lot!

                                There is my.cnf:
                                Code:
                                [client]
                                port            = 3306
                                socket          = /var/run/mysqld/mysqld.sock
                                
                                [mysqld_safe]
                                socket          = /var/run/mysqld/mysqld.sock
                                nice            = 0
                                
                                [mysqld]
                                user            = mysql
                                pid-file        = /var/run/mysqld/mysqld.pid
                                socket          = /var/run/mysqld/mysqld.sock
                                port            = 3306
                                basedir         = /usr
                                datadir         = /var/lib/mysql
                                tmpdir          = /tmp
                                lc-messages-dir = /usr/share/mysql
                                skip-external-locking
                                
                                bind-address            = 127.0.0.1
                                
                                key_buffer              = 16M
                                max_allowed_packet      = 16M
                                thread_stack            = 192K
                                thread_cache_size       = 8
                                
                                myisam-recover          = BACKUP
                                
                                query_cache_limit       = 1M
                                query_cache_size        = 16M
                                
                                expire_logs_days        = 10
                                max_binlog_size         = 100M
                                
                                innodb_data_home_dir = /var/lib/mysql
                                innodb_data_file_path = ibdata1:10M:autoextend:max:4096M
                                innodb_file_per_table = 1
                                innodb_log_group_home_dir = /var/lib/mysql
                                
                                innodb_buffer_pool_size = 2048M
                                innodb_additional_mem_pool_size = 20M
                                
                                innodb_log_file_size = 512M
                                innodb_log_buffer_size = 64M
                                innodb_flush_log_at_trx_commit = 2
                                innodb_flush_method = O_DIRECT
                                
                                [mysqldump]
                                quick
                                quote-names
                                max_allowed_packet      = 16M
                                
                                [mysql]
                                
                                [isamchk]
                                key_buffer              = 16M
                                
                                !includedir /etc/mysql/conf.d/
