Hello,
I'm using the zabbix/zabbix-appliance:ubuntu-4.4.6 Docker image on a Kubernetes cluster running on minikube, on an Ubuntu 21.10 laptop.
I have an issue where the Zabbix web server is refusing connections, and after looking into it I saw that the MySQL server is not running in the container.
Some investigation has shown that the OOM killer terminates the process.
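For reference, the kills are visible in the kernel log of the minikube node (the exact message wording varies with the kernel version); it was along these lines:

Code:
# from the host, open a shell on the minikube node
minikube ssh

# inside the node, look for OOM-killer entries in the kernel log;
# mysqld shows up as the killed process
sudo dmesg | grep -i -E "out of memory|oom"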
I've allocated 16 GB of memory and 8 CPUs to the minikube VM; the laptop itself has 32 GB of memory.
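For completeness, the allocation itself is nothing special, just minikube's standard flags at cluster creation, something like:

Code:
# create the minikube cluster with 16 GB of RAM and 8 CPUs
# (--memory takes megabytes or a unit suffix such as 16g)
minikube start --cpus=8 --memory=16384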
After the MySQL server is killed, if I start it back up again by running "service mysql start" in the container, it goes into an endless loop: mysqld allocates memory until the OOM killer terminates it, the process restarts, and the cycle repeats.
It also uses a lot of CPU while this is happening, and its memory usage grows from a few megabytes to the full 16 GB in about 8 seconds each cycle.
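Those numbers come from just polling mysqld's resident size; something along these lines shows the growth (assuming ps and watch are available in the container, otherwise top shows the same values):

Code:
# inside the container: print mysqld's PID, resident size (kB) and
# virtual size (kB) once a second to watch the growth
watch -n 1 'ps -C mysqld -o pid,rss,vsz,cmd'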
Running top in the container in this situation, I get:

Code:
# top
top - 09:15:50 up 2 days, 23:10,  0 users,  load average: 1.62, 2.22, 1.49
Tasks:  60 total,   2 running,  58 sleeping,   0 stopped,   0 zombie
%Cpu(s):  3.3 us, 11.3 sy,  0.0 ni, 85.2 id,  0.2 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 16418592 total,   434496 free, 14550088 used,  1434008 buff/cache
KiB Swap:        0 total,        0 free,        0 used.   455796 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 2491 mysql     20   0 16.065g 0.011t  10408 R 100.0 69.9   0:03.57 mysqld

Whereas for a coworker of mine, who runs the exact same cluster but on a Mac laptop, the result on his stable cluster is:

Code:
# top
top - 09:12:39 up 2 days, 23:11,  0 users,  load average: 0.34, 0.73, 0.66
Tasks:  58 total,   1 running,  57 sleeping,   0 stopped,   0 zombie
%Cpu(s):  2.7 us,  3.4 sy,  0.0 ni, 93.8 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 11234836 total,  1967128 free,  3543852 used,  5723856 buff/cache
KiB Swap:  1048572 total,  1048572 free,        0 used.  7033680 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
  222 mysql     20   0 3675076 261972  20640 S   0.3  2.3   0:39.30 mysqld

I have another coworker who has the same Linux + minikube setup as mine (except that his version of Ubuntu is somewhat older), and for him it also works fine.
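In case it helps with comparing the two environments, the MySQL memory settings in the container can be dumped like this (standard server variables; the credentials are whatever the appliance image uses, -u root here is only an example):

Code:
# inside the container: dump the main memory-related server variables
# so the working and the broken setups can be compared side by side
mysql -u root -e "SHOW VARIABLES WHERE Variable_name IN
  ('innodb_buffer_pool_size', 'innodb_log_buffer_size',
   'key_buffer_size', 'max_connections', 'table_open_cache');"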
If anyone has any idea as to what might be causing it or how I can troubleshoot this issue, any help would be greatly appreciated.
Thanks in advance,
Tomer.