Zabbix Server HDD Space
  • Pradish
    Junior Member
    • Jul 2017
    • 9

    #1

    Zabbix Server HDD Space

    Hi All,
    I have been using Zabbix for the last 6 months. The server I use has a 180 GB disk, which I initially thought would be sufficient.
    I have added 264 hosts for monitoring, with 47256 items; the MySQL DB itself is now 127 GB and the server is running out of disk space.
    I would like help estimating how much disk space I will need if I add 600 hosts, so that I can migrate the SQL server to another machine. Is there any way to optimize the MySQL DB so it consumes less disk space?
  • vesper1978
    Member
    • Nov 2016
    • 59

    #2
    If you don't want the DB to get as large, don't monitor as many items and/or store items for a shorter period of time.


    • jan.garaj
      Senior Member
      Zabbix Certified Specialist
      • Jan 2010
      • 506

      #3
      DevOps Monitoring Expert advice: Dockerize/automate/monitor all the things.
      My DevOps stack: Docker / Kubernetes / Mesos / ECS / Terraform / Elasticsearch / Zabbix / Grafana / Puppet / Ansible / Vagrant


      • Pradish
        Junior Member
        • Jul 2017
        • 9

        #4
        I am looking for help in checking whether my disk size is sufficient, or whether I have missed any optimization for the MySQL DB.

        Per the Zabbix server's reporting, the average number of values processed is 1.27 K per second, and housekeeping is enabled for 270 days.
        Is the calculation below correct?

        270 days × 24 × 3600 × (1.27 × 1000) = 29,626,560,000 values stored over 270 days

        29,626,560,000 × 90 bytes = 2,666,390,400,000 bytes ≈ 2.7 TB of disk
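
        To make this arithmetic reusable, here is a minimal Python sizing sketch (not an official Zabbix formula) encoding the same estimate; the 90 bytes/value row cost is the figure assumed in this thread:

        def history_bytes(days, nvps, bytes_per_value=90):
            # Estimated raw-history size: one row per processed value.
            return days * 24 * 3600 * nvps * bytes_per_value

        size = history_bytes(days=270, nvps=1270)   # 1.27 K values/sec
        print(f"{size:,} bytes ~= {size / 1e12:.2f} TB")
        # prints: 2,666,390,400,000 bytes ~= 2.67 TB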


        • kloczek
          Senior Member
          • Jun 2006
          • 1771

          #5
          You are making a very big overestimation of the required disk space.
          I'm almost sure that you don't need to keep all 270 days of raw data.
          Every full hour, Zabbix adds a trends point which holds the max, min and avg of the last hour's raw data.
          With only 15 days of raw data (2 weeks + 1 day) and 3 years of trends-only data, your trends data will take about 3% of the 15-day raw data.
          Another thing: you should not be using flat tables but partitioned ones.
          All history* and trends* tables should be partitioned, at least daily (history*) and monthly (trends*).
          Why? Because instead of the housekeeper deleting the oldest data from the history tables once a day with DELETE queries, expiring data takes only a few IOs to drop the oldest partition's file and create a new one in advance; see the sketch below.
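
          As an illustration, a hedged Python sketch of the kind of daily RANGE-partitioning DDL used on Zabbix history tables, keyed on the clock unix-timestamp column (the table name and dates are examples only; production setups normally rely on a maintained partitioning script or scheduled MySQL events rather than ad-hoc DDL):

          from datetime import date, timedelta

          def daily_partition_ddl(table, start, days):
              # One partition per day; each holds rows with clock < next midnight.
              parts = []
              for i in range(days):
                  day = start + timedelta(days=i)
                  boundary = day + timedelta(days=1)
                  parts.append(f"PARTITION p{day:%Y%m%d} VALUES LESS THAN "
                               f"(UNIX_TIMESTAMP('{boundary} 00:00:00'))")
              return (f"ALTER TABLE {table} PARTITION BY RANGE (clock) (\n  "
                      + ",\n  ".join(parts) + "\n);")

          print(daily_partition_ddl("history", date(2017, 8, 1), 15))
          # Expiring the oldest day is then one cheap statement instead of a huge DELETE:
          #   ALTER TABLE history DROP PARTITION p20170801;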

          So:

          15 × 24 × 3600 × (1.27 × 1000) × 1.03 × 90 = 152,576,784,000 bytes ≈ 142 GB
          http://uk.linkedin.com/pub/tomasz-k%...zko/6/940/430/
          https://kloczek.wordpress.com/
          zapish - Zabbix API SHell binding https://github.com/kloczek/zapish
          My zabbix templates https://github.com/kloczek/zabbix-templates


          • kloczek
            Senior Member
            • Jun 2006
            • 1771

            #6
            BTW: the exact size ratio of history to trends data depends on the sampling rate. Trends data size depends only on the number of scalar items (there is, of course, no trends data for metrics which produce strings). The size of the raw history data depends on the number of items and the average sampling rate.
            In other words: the more frequently metrics are sampled, the proportionally less space needs to be reserved for trends.
            The ratio I used in the estimation equation, +3% for trends data, may sometimes be even lower, or may fluctuate up to (let's say) 15%.
            However, even with +20% and the assumption that you don't need to keep raw history data for so long, it will still be way less than your first estimate.
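
            A quick back-of-the-envelope check of that ratio in Python, using this thread's numbers (the equal per-row cost is a rough assumption, and only the per-day accumulation is compared):

            nvps = 1270      # values processed per second (from post #4)
            items = 47256    # item count; string items produce no trends
            row_bytes = 90   # assume a similar per-row cost for history and trends

            history_per_day = nvps * 86400 * row_bytes   # one row per sampled value
            trends_per_day = items * 24 * row_bytes      # one row per item per hour
            print(f"trends accumulate at {trends_per_day / history_per_day:.1%} of the history rate")
            # ~1.0% with these inputs; a slower average sampling rate pushes the ratio up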

            Nevertheless, in a real estimation, other factors should be added on top of the above:
            - up to +100% disk space to be able to optimize tables
            - if you are going to use ZFS with transparent compression: gzip-1 is able to reduce the whole physical disk space by up to 4 times (no, you cannot gain the same compression ratio by altering the history*/trends* tables to use compression)
            - if you are going to use Oracle DB columnar compression: the total physical disk space could additionally be reduced by even more than a factor of 10.

            Compression is very effective because with it you will also need much less physical memory to cache MRU/MFU data (in the case of MySQL, RAM size should be at least half a day's worth of raw data), as that data will be cached in memory in compressed form.
            Such compression is key to achieving very high write speed for new data, as every INSERT and UPDATE query generates a lot of read IOs before anything is updated or added to physical storage.
            http://uk.linkedin.com/pub/tomasz-k%...zko/6/940/430/
            https://kloczek.wordpress.com/
            zapish - Zabbix API SHell binding https://github.com/kloczek/zapish
            My zabbix templates https://github.com/kloczek/zabbix-templates


            • Pradish
              Junior Member
              • Jul 2017
              • 9

              #7
              Thanks kloczek,
              I have set history to 7 days and trends to 365 days, and I have observed that the DB size is no longer growing.
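
              For anyone wanting to apply such a retention change in bulk, here is a hedged Python sketch using the pyzabbix client (the URL, credentials and host filter are placeholders; on Zabbix 3.x, history and trends are integer day counts, as used here):

              from pyzabbix import ZabbixAPI

              zapi = ZabbixAPI("http://zabbix.example.com")
              zapi.login("Admin", "zabbix")

              # Set 7-day history and 365-day trends on every item of one host.
              for item in zapi.item.get(host="db-server-01", output=["itemid"]):
                  zapi.item.update(itemid=item["itemid"], history=7, trends=365)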

