We're really needing help now: zabbix DB is becoming way too big!

  • just2blue4u
    Senior Member
    • Apr 2006
    • 347

    #31
    Solved!

    they're essential if you like to do "point in time" restores.
    Right. But with a daily mysqldump, I only need to keep the binlogs from the dump until now! So all binlogs older than the dump are just a waste of disk space (as I know now)!

    Originally posted by myself
    I think this is what caused the fast growth!
    Tomorrow I'll confirm whether the daily growth (1 GB/day) is really caused by the logs, but I can't believe it isn't!

    So, big thanks to all who made me wiser and helped me here!!! Thanks a lot! *kiss*
    --> confirmed! <-- see this HDD Usage graph. Thanks, guys!
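
    If the binlogs do turn out to be the culprit, one way to automate the cleanup (a sketch, not from this thread; it assumes MySQL 5.x and that the newest successful mysqldump is at most a day old):

    ```sql
    -- Run right after the nightly mysqldump has completed successfully.
    -- The 1-day window is an assumption; adjust it to your dump schedule.
    PURGE BINARY LOGS BEFORE NOW() - INTERVAL 1 DAY;
    ```

    Alternatively, setting expire_logs_days = 2 in my.cnf lets MySQL expire old binlogs on its own; either way, always keep enough binlogs to cover the span since the last good dump, or point-in-time recovery breaks.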

    --------
    SOLVED!
    Big ZABBIX is watching you!
    (... and my 48 hosts, 4513 items, 1280 triggers via zabbix v1.6 on CentOS 5.0)

    • milprog
      Junior Member
      • Jul 2007
      • 27

      #32
      Experience with a similar-sized system based on PostgreSQL

      I have a similar system here (62 hosts, 4500 elements, 1500 triggers, about 900'000 events) running since mid-2007 on a PostgreSQL DB; the hardware is an inexpensive HP ML110G5 (one Xeon 3040 w/3 GB RAM) with 4*160 GB SATA disks connected to a built-in SMART 200i controller in a RAID 10 configuration. Of course the system, running CentOS 5.1, needed some tuning for this load, but now it runs very smoothly. The database is on a separate LVM partition and currently occupies 46 GB. I let the housekeeping run every 6 hours, which gives the typical CPU load diagram shown in the attachment.
      I started with PostgreSQL 8.2, then upgraded to 8.3beta2. 8.3rc2 is available now and the final release is just a few days away. Zabbix still has some minor problems with 8.3 (requiring a few patches), but for performance reasons I suggest you give PostgreSQL a try as soon as 8.3 comes out. PostgreSQL is rock solid and good for everyone who needs a serious database with predictable behaviour.
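
      For reference, the 6-hour housekeeping interval corresponds to a zabbix_server.conf setting along these lines (a sketch; HousekeepingFrequency is the standard parameter, the value of 6 simply mirrors my setup):

      ```
      # zabbix_server.conf -- run the housekeeper every 6 hours
      HousekeepingFrequency=6
      ```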
      Cheers
      --Marcel
      Attached Files

      • luquee
        Junior Member
        • Apr 2008
        • 13

        #33
        performance

        Hi just2blue4u,

        I read through your case about the size of your tables, and I'm now carrying out the same steps.

        I want to ask: after making all the changes to the database, did the response times of your web frontend improve?

        Thank you

        • milprog
          Junior Member
          • Jul 2007
          • 27

          #34
          Meanwhile I changed several parameters to be stored at longer intervals, e.g. the names of SNMP network interfaces, the disk usage values, etc. My database has now shrunk to about 46 GB. I moved the database to a separate server (ML110 G5 w/SMART 200i controller and 4 SATA disks). Performance of the web frontend is acceptable; of course I have to wait a little when my query retrieves tens of thousands of values. When I retrieve e.g. the CPU usage for the last hour, I wait about 4 seconds; when I retrieve the same data for the last year, I have to wait about 20 seconds until the graph appears. Of course my PostgreSQL 8.3 was tuned a little (the standard memory-usage values are way too low).

          Please find below the most important changes I made in postgresql.conf (for my 8 GB RAM 64bit CentOS 5.2 system with BBWC array controller).

          I should perhaps mention that I rebuilt my own PostgreSQL RPMs with the --integer-datetimes option for several reasons; AFAIK this will become the default for 8.4. But for zabbix the difference (to "floating point datetime values") should be unnoticeable.

          Regards
          --Marcel

          shared_buffers = 3GB
          temp_buffers = 80MB
          work_mem = 512MB
          maintenance_work_mem = 384MB
          max_fsm_pages = 204800
          max_fsm_relations = 10000
          synchronous_commit = off
          checkpoint_segments = 96
          checkpoint_timeout = 5min
          random_page_cost = 3.0
          cpu_operator_cost = 0.0125
          effective_cache_size = 4096MB
          autovacuum = off
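
          Note that a plain reload is not enough for all of these; shared_buffers in particular only takes effect after a full restart. A quick way to check that the new values are actually active (a sketch, assuming local psql access to the zabbix database):

          ```sql
          -- after restarting PostgreSQL:
          SHOW shared_buffers;       -- expect 3GB
          SHOW synchronous_commit;   -- expect off
          ```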
