Fine tuning PSQL (with timescaledb ext)
  • shafuq
    Junior Member
    • Oct 2024
    • 17

    #1

    Fine tuning PSQL (with timescaledb ext)

    I've set up a Zabbix central server with a few proxies. All of them use PostgreSQL as their DB (Ubuntu/nginx/PostgreSQL/Zabbix 7.0.5).

    The servers have the following resources:
    - zabbix_server: 4xCore, 16GB RAM, 100GB Disk
    - zabbix_proxies: 2xCore, 8GB RAM, 60GB Disk.

    My question is regarding fine tuning the DBs.

    I went ahead and tuned the proxy server:

    sudo vim /etc/postgresql/16/main/postgresql.conf :

    # DB Version: 16
    # OS Type: linux
    # DB Type: mixed
    # Total Memory (RAM): 8 GB
    # CPUs num: 2
    # Connections num: 280
    # Data Storage: san

    max_connections = 280
    shared_buffers = 2GB
    effective_cache_size = 6GB
    maintenance_work_mem = 512MB
    checkpoint_completion_target = 0.9
    wal_buffers = 16MB
    default_statistics_target = 100
    random_page_cost = 1.1
    effective_io_concurrency = 300
    work_mem = 1872kB
    huge_pages = off
    min_wal_size = 1GB
    max_wal_size = 4GB
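
    For reference, shared_buffers and max_connections only take effect after a full restart (a reload isn't enough), so after editing the file I restart and sanity-check the values with something like:

    ```shell
    # shared_buffers / max_connections need a full restart, not pg_reload_conf()
    sudo systemctl restart postgresql

    # confirm the running server actually picked up the new values
    sudo -u postgres psql -c "SHOW shared_buffers;" \
                          -c "SHOW max_connections;" \
                          -c "SHOW work_mem;"
    ```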

    I got these values from:

    TBH I'm not sure what they should be. I hope these values are better than what came out of the box.
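
    FWIW, the work_mem number is consistent with a heuristic along these lines (my own reading of the generated comment block, not anything official): spread the RAM left over after shared_buffers across max_connections with safety factors for a "mixed" workload and for parallel workers.

    ```shell
    # Sketch of the heuristic I think produced work_mem = 1872kB above:
    #   (RAM - shared_buffers) / (max_connections * 3), halved for a mixed
    #   workload and halved again for parallelism. Variable names are mine.
    ram_kb=$((8 * 1024 * 1024))      # 8 GB total RAM
    shared_kb=$((2 * 1024 * 1024))   # shared_buffers = 2GB
    conns=280                        # max_connections
    echo "work_mem = $(( (ram_kb - shared_kb) / (conns * 3) / 2 / 2 ))kB"
    # prints: work_mem = 1872kB
    ```

    Plugging in the main server's numbers (16 GB RAM, 4GB shared_buffers, 1024 connections) gives 1024kB, which matches the 1MB below too.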

    For the main server I'm using the TimescaleDB extension alongside PostgreSQL for compression.

    So here is what I'd like to know: should I modify postgresql.conf for the main Zabbix server the same way I did for the proxy? I'm asking because it uses the TimescaleDB extension and I don't want to break anything there.

    If I did modify the conf file it would be something like this:

    # DB Version: 16
    # OS Type: linux
    # DB Type: mixed
    # Total Memory (RAM): 16 GB
    # CPUs num: 4
    # Connections num: 1024
    # Data Storage: san

    max_connections = 1024
    shared_buffers = 4GB
    effective_cache_size = 12GB
    maintenance_work_mem = 1GB
    checkpoint_completion_target = 0.9
    wal_buffers = 16MB
    default_statistics_target = 100
    random_page_cost = 1.1
    effective_io_concurrency = 300
    work_mem = 1MB
    huge_pages = off
    min_wal_size = 1GB
    max_wal_size = 4GB
    max_worker_processes = 4
    max_parallel_workers_per_gather = 2
    max_parallel_workers = 4
    max_parallel_maintenance_workers = 2
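
    One TimescaleDB-specific thing I'd check before applying this: the extension schedules its own background workers (compression and retention jobs), so max_worker_processes has to leave room for them on top of the parallel workers. A rough sanity check, plus the tuning utility that ships with the TimescaleDB packages (flags from memory, so double-check them):

    ```shell
    # How many background workers does TimescaleDB want, and how many
    # worker slots does the server have in total?
    sudo -u postgres psql -c "SHOW timescaledb.max_background_workers;" \
                          -c "SHOW max_worker_processes;"

    # timescaledb-tune suggests settings that respect the extension;
    # --dry-run prints them without touching postgresql.conf
    timescaledb-tune --memory 16GB --cpus 4 --dry-run
    ```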

    PS: The max_connections part is something I'm not sure about. I just picked a random number higher than the default. Please enlighten me on how to pick a number here.
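
    In case it helps the discussion, here's how I'd measure actual usage rather than guess (a sketch): roughly, Zabbix opens one DB connection per poller/trapper/syncer process, plus whatever the frontend's php-fpm pool uses, plus some headroom.

    ```shell
    # Compare connections actually open right now against the configured limit
    sudo -u postgres psql -c "SELECT count(*) FROM pg_stat_activity;" \
                          -c "SHOW max_connections;"
    ```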
    Last edited by shafuq; 07-11-2024, 10:23.