MySQL database is down from time to time

  • ZabbixUser2020
    Junior Member
    • Jan 2020
    • 13

    #1

    MySQL database is down from time to time

    Hi there,

    I need some advice. Our environment:

    Ubuntu 16.04, 2x cores, 4GB RAM

    Parameter | Value | Details
    Zabbix server is running | Yes | localhost:10051
    Number of hosts (enabled/disabled/templates) | 85 | 60 / 2 / 23
    Number of items (enabled/disabled/not supported) | 9256 | 8152 / 478 / 626
    Number of triggers (enabled/disabled [problem/ok]) | 1965 | 1564 / 401 [43 / 1521]
    Number of users (online) | 29 | 2
    Required server performance, new values per second | 118.07


    Variable_name | Value
    innodb_version | 5.6.42-84.2
    protocol_version | 10
    version | 10.0.38-MariaDB-0ubuntu0.16.04.1
    version_comment | Ubuntu 16.04
    version_compile_os | debian-linux-gnu
    version_malloc_library | bundled jemalloc


    innodb_buffer_pool_instances = 8 (default)
    innodb_buffer_pool_size = 128MB
    innodb_buffer_pool_chunk_size = not possible to retrieve, as this MySQL version is too old for that variable.
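
    (For reference, these can be checked with a query along these lines:)
    Code:
    -- list the InnoDB buffer pool settings; on a version this old,
    -- innodb_buffer_pool_chunk_size simply does not appear in the output
    SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool%';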


    Housekeeping runs every 1h and drops around 300k+ records each run, taking about 2-10 min each time.
    Housekeeping retention is set to pretty low values, everything up to 1 week, apart from "Trends" (365d) and "Audit" (180d).
    The queue is pretty empty apart from some spikes; normally records stay no longer than 2-3 min.

    The problems:
    Around 50 times per day we get Zabbix server problems such as:
    Too many processes on Zabbix server
    Disk I/O is overloaded on Zabbix server

    Once every month or two we get: MySQL database "zabbix" on "localhost" is not available: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)

    Regarding MySQL settings, I increased innodb_buffer_pool_size from 128MB to 2GB.
    Now we get even more issues, with loads of messages:
    Too many processes on Zabbix
    Disk I/O is overloaded on Zabbix server
    Less than 25% free in the history cache
    Zabbix unreachable poller processes more than 75% busy
    OS-Config agent on Zabbix server is unreachable for 5 minutes
    Zabbix history syncer processes more than 75% busy


    And of course, from time to time, the one below:
    MySQL database "zabbix" on "localhost" is not available: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)

    What are your suggestions to improve performance?
  • tim.mooney
    Senior Member
    • Dec 2012
    • 1427

    #2
    You provided a lot of very good information about your environment, but I don't see anything about what version of Zabbix you're using.

    Is this install in a VM environment? If so, can you give the VM more RAM? For example, if you increase the RAM for the VM to 12 GiB and set 'innodb_buffer_pool_size' to 8G, do the issues go away?

    The basic issue appears to be that your system is I/O overloaded, probably because of database activity. If you're using traditional disks ("spinning rust") for the volume the database is installed upon, you might be able to fix the issue by changing that filesystem so it's backed by storage with more I/O operations, but I would try more RAM as a solution first. The more memory you can devote to MariaDB, in particular the innodb_buffer_pool_size, the better it's going to perform.
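
    If you do go that route, the change is a one-liner in the MariaDB config. On Ubuntu it usually lives somewhere like /etc/mysql/mariadb.conf.d/50-server.cnf, though the exact path is an assumption on my part -- check your own layout:
    Code:
    [mysqld]
    # with a 12 GiB VM, this leaves ~4 GiB for the OS + Zabbix server + web front-end
    innodb_buffer_pool_size = 8G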

    As far as the periodic messages about "is not available": have you looked through your system logs, to see if you see any messages with the text "oom" in them at about the same time that MariaDB was unavailable? I'm suspicious that the oom-killer is killing the database to free memory in extreme low-memory situations.
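
    Something like the following should turn up any oom-killer activity; log names vary a bit by distro, so adjust to taste:
    Code:
    # zgrep also handles the rotated, gzip-compressed copies
    zgrep -i -E 'oom|out of memory' /var/log/syslog* /var/log/kern.log*
    # the kernel ring buffer is worth a look too
    dmesg | grep -i -E 'oom|killed process'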

    Comment

    • ZabbixUser2020
      Junior Member
      • Jan 2020
      • 13

      #3
      tim.mooney The environment is in a cloud, and yes, it is an Ubuntu VM running both Zabbix server and MySQL/MariaDB. Ubuntu 16.04, 2x cores, 4GB RAM, Premium SSD OS disk (Zabbix is installed there as well). Zabbix server version: 4.4.10.

      First I ruled out a lack of RAM: looking at the Zabbix VM memory metrics, available memory always seems more than sufficient, and the server barely uses more than 2GB. Please ignore the fluctuations since the 8th of September, as I increased innodb_buffer_pool_size from 128M to 2G and then, after getting more issues, reduced it to 1G on the 9th of September.
      [Screenshot: Zabbix VM memory utilization graph (image.png)]

      But now I assume these metrics were misleading me: the VM always had plenty of free memory precisely because the default innodb_buffer_pool_size=128M was in place, so we never made use of the full RAM capacity. Am I right?
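
      For what it's worth, the actual pool utilization can be checked directly, along these lines:
      Code:
      -- free vs. data pages in the buffer pool; lots of free pages
      -- would mean the pool was never the bottleneck at that size
      SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%';
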
      Below are the latest problems for the last day (innodb_buffer_pool_size = 1G), with some adjustments to the Zabbix config settings as well:
      CacheSize: 24M -> 64M
      HistoryCacheSize: 16M (default) -> 32M
      HistoryIndexCacheSize: 4M (default) -> 8M
      [Screenshot: Zabbix problems list for the last day (image.png)]


      Do you think something like 8GB RAM and innodb_buffer_pool_size = 4G would solve the issues?

      Comment

      • ZabbixUser2020
        Junior Member
        • Jan 2020
        • 13

        #4
        tim.mooney I could not find any "oom" entries in the logs, unless I am searching for them incorrectly.
        Anyway, here is some info from just before the crash.
        > /var/log/syslog.2:
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Pending reads 0
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Pending writes: LRU 0, flush list 9, single page 0
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Pages made young 295, not young 0
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: 0.00 youngs/s, 0.00 non-youngs/s
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Pages read 8708, created 1914, written 99951
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: 0.00 reads/s, 0.00 creates/s, 0.00 writes/s
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: LRU len: 10622, unzip_LRU len: 0
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: I/O sum[0]:cur[0], unzip sum[0]:cur[0]
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: ---BUFFER POOL 7
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Buffer pool size 16383
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Buffer pool size, bytes 268419072
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Free buffers 5456
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Database pages 10815
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Old database pages 3978
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Modified db pages 1952
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Percent of dirty pages(LRU & free pages): 11.996
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Max dirty pages percent: 75.000
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Pending reads 0
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Pending writes: LRU 0, flush list 9, single page 0
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Pages made young 279, not young 0
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: 0.00 youngs/s, 0.00 non-youngs/s
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Pages read 8078, created 2737, written 95093
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: 0.00 reads/s, 0.00 creates/s, 0.00 writes/s
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: No buffer pool page gets since the last printout
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: LRU len: 10815, unzip_LRU len: 0
        ...
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Read view trx id 2365638285
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Read view trx id 2365638403
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Read view trx id 2365638615
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Read view trx id 2365638680
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Read view trx id 2365638681
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: -----------------
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Main thread process no. 6977, id 139930319701760, state: enforcing dict cache limit
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Number of rows inserted 4447728, updated 94649, deleted 4123296, read 44661310
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: 0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Number of system rows inserted 0, updated 0, deleted 0, read 0
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: 0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: ----------------------------
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: END OF INNODB MONITOR OUTPUT
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: ============================
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: InnoDB: ###### Diagnostic info printed to the standard error stream
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: InnoDB: Error: semaphore wait has lasted > 600 seconds
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: InnoDB: We intentionally crash the server, because it appears to be hung.
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: 2022-09-08 22:37:12 7f44117fd700 InnoDB: Assertion failure in thread 139930328094464 in file srv0srv.cc$
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: InnoDB: We intentionally generate a memory trap.
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: InnoDB: Submit a detailed bug report to https://jira.mariadb.org/
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: InnoDB: If you get repeated assertion failures or crashes, even
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: InnoDB: immediately after the mysqld startup, there may be
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: InnoDB: corruption in the InnoDB tablespace. Please refer to
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: InnoDB: http://dev.mysql.com/doc/refman/5.6/...-recovery.html
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: InnoDB: about forcing recovery.
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: 220908 22:37:12 [ERROR] mysqld got signal 6 ;
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: This could be because you hit a bug. It is also possible that this binary
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: or one of the libraries it was linked against is corrupt, improperly built,
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: or misconfigured. This error can also be caused by malfunctioning hardware.
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld:
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: To report this bug, see https://mariadb.com/kb/en/reporting-bugs
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld:
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: We will try our best to scrape up some info that will hopefully help
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: diagnose the problem, but since we have already crashed,
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: something is definitely wrong and this may fail.
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld:
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Server version: 10.0.38-MariaDB-0ubuntu0.16.04.1
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: key_buffer_size=16777216
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: read_buffer_size=131072
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: max_used_connections=91
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: max_threads=153
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: thread_count=81
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: It is possible that mysqld could use up to
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 352330 K bytes of memory
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Hope that's ok; if not, decrease some variables in the equation.
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld:
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Thread pointer: 0x0
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: Attempting backtrace. You can use the following information to find out
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: where mysqld died. If you see no messages after this, something went
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: terribly wrong...
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: stack_bottom = 0x0 thread_stack 0x30000
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: /usr/sbin/mysqld(my_print_stacktrace+0x3d)[0xc23d6d]
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: /usr/sbin/mysqld(handle_fatal_signal+0x3bf)[0x7486af]
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: /lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7f44aa8c8390]
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x38)[0x7f44a9c93438]
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: /lib/x86_64-linux-gnu/libc.so.6(abort+0x16a)[0x7f44a9c9503a]
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: /usr/sbin/mysqld[0x9b7814]
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: /lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba)[0x7f44aa8be6ba]
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f44a9d6551d]
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: information that should help you find out what is causing the crash.
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld_safe: Number of processes running now: 0
        Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld_safe: mysqld restarted
        Sep 8 22:37:13 lv-zabbix-we-vm1 mysqld: 220908 22:37:13 [Note] /usr/sbin/mysqld (mysqld 10.0.38-MariaDB-0ubuntu0.16.04.1) starting as process 60$
        Sep 8 22:37:13 lv-zabbix-we-vm1 mysqld: 220908 22:37:13 [Note] InnoDB: Using mutexes to ref count buffer pool pages
        Sep 8 22:37:13 lv-zabbix-we-vm1 mysqld: 220908 22:37:13 [Note] InnoDB: The InnoDB memory heap is disabled
        Sep 8 22:37:13 lv-zabbix-we-vm1 mysqld: 220908 22:37:13 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
        Sep 8 22:37:13 lv-zabbix-we-vm1 mysqld: 220908 22:37:13 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
        Sep 8 22:37:13 lv-zabbix-we-vm1 mysqld: 220908 22:37:13 [Note] InnoDB: Compressed tables use zlib 1.2.8
        Sep 8 22:37:13 lv-zabbix-we-vm1 mysqld: 220908 22:37:13 [Note] InnoDB: Using Linux native AIO
        Sep 8 22:37:13 lv-zabbix-we-vm1 mysqld: 220908 22:37:13 [Note] InnoDB: Using CPU crc32 instructions
        Sep 8 22:37:13 lv-zabbix-we-vm1 mysqld: 220908 22:37:13 [Note] InnoDB: Initializing buffer pool, size = 2.0G
        Sep 8 22:37:13 lv-zabbix-we-vm1 mysqld: 220908 22:37:13 [Note] InnoDB: Completed initialization of buffer pool
        Sep 8 22:37:13 lv-zabbix-we-vm1 mysqld: 220908 22:37:13 [Note] InnoDB: Highest supported file format is Barracuda.
        Sep 8 22:37:13 lv-zabbix-we-vm1 mysqld: 220908 22:37:13 [Note] InnoDB: Starting crash recovery from checkpoint LSN=5004638288471
        Sep 8 22:37:13 lv-zabbix-we-vm1 mysqld: 220908 22:37:13 [Note] InnoDB: Restoring possible half-written data pages from the doublewrite buffer...
        Sep 8 22:37:13 lv-zabbix-we-vm1 mysqld: InnoDB: Database page corruption or a failed
        Sep 8 22:37:13 lv-zabbix-we-vm1 mysqld: InnoDB: file read of space 430 page 603057.
        Sep 8 22:37:13 lv-zabbix-we-vm1 mysqld: InnoDB: Trying to recover it from the doublewrite buffer.
        Sep 8 22:37:13 lv-zabbix-we-vm1 mysqld: 220908 22:37:13 [Note] InnoDB: Recovered the page from the doublewrite buffer.

        Comment

        • tim.mooney
          Senior Member
          • Dec 2012
          • 1427

          #5
          Originally posted by ZabbixUser2020

          Code:
          Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: InnoDB: Error: semaphore wait has lasted > 600 seconds
          Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: InnoDB: We intentionally crash the server, because it appears to be hung.
          Code:
          Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: It is possible that mysqld could use up to
          Sep 8 22:37:12 lv-zabbix-we-vm1 mysqld: key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 352330 K bytes of memory
          MySQL is intentionally aborting because it has been waiting more than 10 minutes (the 600 seconds in the log) on a semaphore. That probably points to a situation where the operating system is not in a healthy state, but the underlying cause is hard to guess without knowing more about what was happening on the system at the time. My speculation is that it was under extreme I/O load, but you've indicated that it's using SSDs, so that's a little less likely (but not impossible).

          If I were in your situation, I would probably try a larger memory VM, if that's moderately easy to do. If 8GiB is the next size up, trying that AND increasing innodb_buffer_pool_size to either 4G or 5G would be my next test.

          With a 4GiB VM and an "all-in-one" Zabbix install (database + Zabbix server + Zabbix web front-end), you're at the absolute minimum size, so balancing how much memory to give the database vs. how much memory the OS + Zabbix server + web bits all need is very tricky. Even small adjustments can be complicated. Once you get to an 8 GiB memory size, it becomes a little easier: save half the RAM for the OS + Zabbix server + zabbix web, give the other half to the database. As the memory size of the VM gets larger, a larger % of it should be given to the database.

          With your cloud provider, when you select the "Premium SSD" option, do they automatically apply any MySQL/MariaDB tuning settings to /etc/mysql.cnf or /etc/mysql.d/*.conf ? I'm just wondering if there are any tuning settings related to I/O and SSDs that are "automatically" in place in the MariaDB config. With the version of MariaDB you're using, there are some (at times conflicting) recommendations about making I/O settings changes to let MariaDB know that the database is on an SSD. There are both pros and cons to adjusting those I/O settings, from what I've read, so it might take a bunch of research to find out whether they should be increased or not. If you do spend any time researching them, you can put a lot more faith in what the experts at Percona say than in other information you may find on the net. The tricky part is that the recommendations may change with the version of MariaDB in use, and I'm not really sure the 10.0.x version was well-researched.
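
          If you do look into those, the knobs that usually come up are innodb_flush_neighbors and innodb_io_capacity. Treat this as a sketch only, and verify both against the 10.0.x documentation before applying anything:
          Code:
          [mysqld]
          # flushing adjacent pages only helps on spinning disks; 0 is the usual SSD advice
          innodb_flush_neighbors = 0
          # the default of 200 is sized for rotating media; SSDs can sustain far more IOPS
          innodb_io_capacity = 1000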

          Good luck and please update this thread as you proceed with your debugging efforts.

          Comment

          • ZabbixUser2020
            Junior Member
            • Jan 2020
            • 13

            #6
            tim.mooney I upgraded the server to 2x cores, 8GB RAM.

            What values would you suggest for the ones below? Currently they are:
            LogFileSize=0
            StartPollers=80
            # HousekeepingFrequency=1
            # MaxHousekeeperDelete=5000
            #StartPollersUnreachable=80
            CacheSize=128M (this I have increased from 24M)
            HistoryCacheSize=64M (this I have increased from 16M default value)
            HistoryIndexCacheSize=16M (this I have increased from 4M default value)
            LogSlowQueries=3000
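
            In the meantime I plan to keep an eye on the cache utilization with Zabbix internal items, something like these:
            Code:
            # history write cache utilization, percent used
            zabbix[wcache,history,pused]
            # history index cache utilization
            zabbix[wcache,index,pused]
            # configuration cache (CacheSize) utilization
            zabbix[rcache,buffer,pused]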

            Housekeeper output from before upscaling the VM and amending some Zabbix config values:
            14443:20220912:065043.110 housekeeper [deleted 376135 hist/trends, 0 items/triggers, 25 events, 5 problems, 1 sessions, 0 alarms, 64 audit, 0 records in 114.670025 sec,
            14443:20220912:075240.331 housekeeper [deleted 375877 hist/trends, 0 items/triggers, 18 events, 7 problems, 1 sessions, 0 alarms, 127 audit, 0 records in 116.709359 sec,
            14443:20220912:085432.038 housekeeper [deleted 375554 hist/trends, 0 items/triggers, 17 events, 1 problems, 1 sessions, 0 alarms, 125 audit, 0 records in 111.176559 sec,
            ...
            After the increase to 8GB of RAM:
            16372:20220912:125327.070 housekeeper [deleted 318117 hist/trends, 0 items/triggers, 34 events, 3 problems, 1 sessions, 0 alarms, 104 audit, 0 records in 224.325961 sec,
            16372:20220912:135531.031 housekeeper [deleted 405427 hist/trends, 0 items/triggers, 22 events, 1 problems, 3 sessions, 0 alarms, 129 audit, 0 records in 123.445167 sec,

            Also, I am still getting the "Too many processes" Zabbix trigger; the trigger threshold is 300, maybe it is too low?
            Last edited by ZabbixUser2020; 12-09-2022, 15:56.

            Comment

            • tim.mooney
              Senior Member
              • Dec 2012
              • 1427

              #7
              Originally posted by ZabbixUser2020

              Also, I am still getting the "Too many processes" Zabbix trigger; the trigger threshold is 300, maybe it is too low?
              If all of the other warnings have gone away, then I would say most of the I/O issues have been taken care of by the increase in RAM and the tuning settings you changed.

              I don't know what to tell you about the "too many processes" trigger. That's not one we use. My guess is that you should indeed adjust the threshold to account for the number of processes your install needs; if it's the stock trigger from the Linux template, raising the constant in the expression is all it takes (sketch below).
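
              From memory, the expression looks something like this -- double-check against your own template, since template names and trigger syntax vary between Zabbix versions:
              Code:
              {Template OS Linux:proc.num[].avg(5m)}>300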

              Comment
