Experience with PostgreSQL and a high availability strategy?

  • MartinJanda
    Junior Member
    • Oct 2009
    • 27

    #1

    Hi,
    I am using Zabbix 1.8.2 and PostgreSQL 8.3. I have dealt with the large tables by partitioning them with rules. The second and bigger problem is writing: the bottleneck is disk write throughput. Yes, I could temporarily solve this with new, more powerful hardware, but over time I would hit the ceiling again.
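
    For reference, rule-based partitioning of the history table looks roughly like this (a simplified sketch, not my exact schema; the column list and the clock boundaries are only illustrative):

        -- parent table (column list simplified)
        CREATE TABLE history (
            itemid  bigint        NOT NULL,
            clock   integer       NOT NULL,
            value   numeric(16,4) NOT NULL
        );

        -- one child table per month, with a CHECK constraint on its time range
        -- (reads benefit when constraint_exclusion = on in postgresql.conf)
        CREATE TABLE history_2010_06 (
            CHECK (clock >= 1275350400 AND clock < 1277942400)
        ) INHERITS (history);

        -- rule that redirects inserts into the matching child table
        CREATE RULE history_insert_2010_06 AS
            ON INSERT TO history
            WHERE NEW.clock >= 1275350400 AND NEW.clock < 1277942400
            DO INSTEAD
            INSERT INTO history_2010_06 VALUES (NEW.*);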

    A solution might be some kind of PostgreSQL cluster. I have studied the possibilities, and these tools solve the problem of heavy reads, but not of inserting large amounts of data into the DB.

    Maybe pgCluster is the solution. What are your experiences?
  • DanOliver
    Junior Member
    • Oct 2009
    • 6

    #2
    When it comes to write performance, the first thing I would do is set synchronous_commit = off globally. Zabbix is the ideal environment for such a setting: you don't want to put the integrity of your entire database at risk, but you don't mind losing a second or so of data that was supposedly 'committed' in the rare event of a failure.
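
    For example (shown per-session here just to illustrate the setting; for Zabbix you would apply it globally):

        -- try it for a single session first
        SET synchronous_commit TO off;

        -- to apply it globally on 8.3, put this in postgresql.conf
        -- and reload the server (e.g. pg_ctl reload):
        --   synchronous_commit = off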

    As for clustering, I'd have to ask: what are your requirements? Do you need to have multiple management nodes reading from the same database? Is hardware failure your biggest concern?

    For most of our installations, we use DRBD to perform block device replication between machines. An alternative is to use the feature built into recent versions of Postgres to enable WAL shipping.

    For Zabbix, I would be wary of any trigger-based solution -- I would expect performance of your database to drop significantly given the proportion of inserts taking place.

  • MartinJanda
    Junior Member
    • Oct 2009
    • 27

    #3
    Thank you for the synchronous_commit tip. I will try it.

    Requirements?
    The problem is that the disks are too slow for the writes to the DB.
    What exactly do you mean by multiple management nodes?
    Hardware failure is not the biggest concern. The disks are in RAID1, and the chance of both disks failing is acceptable.
    Is block-device replication between machines a good solution for a large and constant stream of writes?

    Yes, you are right that rules and triggers slow down data entry. But how else can I solve the problem of slow reads from PostgreSQL when tables are as big as the history table (30 GB and more)? I do not know any other solution, and partitioning works.

  • DanOliver
    Junior Member
    • Oct 2009
    • 6

    #4
    On the whole, block replication works well, but you need to understand the risks and limitations it introduces. I won't cover them here -- have a look at the DRBD documentation and most of it should become clear.

    My suggestion would be to look at implementing WAL replication every few minutes: http://www.postgresql.org/docs/8.3/s...m-standby.html This will give you a good level of write performance with reasonable resilience, without any particular system requirements or database schema modifications beyond what you have already made.
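
    To give a rough idea, a warm-standby setup on 8.3 involves something like the following (paths, hostnames and the timeout are only illustrative; the docs above have the details):

        # postgresql.conf on the primary
        archive_mode    = on
        archive_command = 'rsync -a %p standby:/var/lib/pgsql/wal_archive/%f'
        archive_timeout = 300    # force a WAL segment switch every few minutes

        # recovery.conf on the standby (pg_standby is shipped in contrib)
        restore_command = 'pg_standby /var/lib/pgsql/wal_archive %f %p'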

  • MartinJanda
    Junior Member
    • Oct 2009
    • 27

    #5
    Thank you. I will read the documentation and think about it.
