Zabbix agent 3.0 raises 'cannot allocate memory' in zabbix_agentd.log

  • chenh@tjnetsec.com
    Junior Member
    • Jan 2018
    • 2

    #1

    Zabbix agent 3.0 raises 'cannot allocate memory' in zabbix_agentd.log

    Hi,

    I have installed the Zabbix server and Zabbix agent successfully, and everything seems OK.
    Every time the agent has run for about 20 days, it starts frequently raising an error in zabbix_agentd.log, and then the agent dies.

    cannot allocate memory

    Environment:
    1) Both the client and the server run CentOS 6.5 with 3 GB of RAM each.
    2) Zabbix client version (my Tomcat app and Logstash also run on this node):
    zabbix-agent-3.0.9-1.el6.x86_64
    agentd.conf
    Server=zabbix
    StartAgents=1
    ServerActive=zabbix
    Hostname=kdc_prod6
    cat /etc/hosts
    10.1.1.6 kdc_prod6
    10.0.1.14 zabbix
    3) Zabbix server version:
    zabbix-web-3.0.9-1.el6.noarch
    zabbix-agent-3.0.9-1.el6.x86_64
    zabbix-java-gateway-3.0.1-2.el6.x86_64
    zabbix-release-3.0-1.el6.noarch
    zabbix-web-mysql-3.0.9-1.el6.noarch
    zabbix-server-mysql-3.0.9-1.el6.x86_64


    All 4 of my servers raise the same error after the agent restarts and runs for about 20 days.

    Can someone help?
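
    In case it helps with diagnosis, a quick way to check whether the agent died because malloc() failed inside it or because the kernel OOM killer removed it (a rough sketch; assumes the default CentOS syslog at /var/log/messages):

        # was the agent taken down by the kernel OOM killer?
        grep -i 'out of memory' /var/log/messages
        grep -i 'killed process' /var/log/messages | grep -i zabbix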
  • Atsushi
    Senior Member
    • Aug 2013
    • 2028

    #2
    A memory leak was fixed in releases after 3.0.9, so how about upgrading to the latest version?
    The latest 3.0 release is 3.0.14.
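
    Since the zabbix-release-3.0 repo package is already installed, the upgrade should be roughly this (a sketch, assuming the stock Zabbix packages and SysV init on CentOS 6):

        # refresh repository metadata and pull the latest 3.0.x agent
        yum clean metadata
        yum update zabbix-agent

        # restart the agent and confirm the new version
        service zabbix-agent restart
        zabbix_agentd -V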

    • chenh@tjnetsec.com
      Junior Member
      • Jan 2018
      • 2

      #3
      Thanks Atsushi

      Thank you for your reply.
      Before I posted this thread, I had already upgraded the agent on one of my server nodes to 3.0.14, as you suggested:
      [user951@kdc_prod5 ~]$ rpm -qa|grep zabbix
      zabbix-agent-3.0.14-1.el6.x86_64
      zabbix-release-3.0-1.el6.noarch
      [user951@kdc_prod5 ~]$

      It looks like the situation has improved a lot.
      After about 8 hours of uptime, 'htop' reports 141M RES for the 'zabbix_agentd: listener #1' process on the upgraded node, compared with 218M RES on the un-upgraded node (the two nodes are identical in every other configuration and were restarted at the same time).

      I would say the problem is only partly resolved: on one hand the memory growth has slowed down, but on the other hand it is still climbing, just more slowly than before, and both nodes keep consuming more memory at their own pace...

      The strange thing is that the problem does not appear on many other nodes in my company that have plenty of RAM (for example 32 GB); on those nodes the memory used by the Zabbix agent stays stable instead of climbing continuously.
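
      For a more objective comparison than eyeballing htop, one could sample the listener's RSS periodically and plot the two nodes side by side (a minimal sketch; assumes the process name zabbix_agentd and a writable /tmp):

          # append a timestamped RSS sample (in KB) for each zabbix_agentd process, hourly
          while true; do
              ps -C zabbix_agentd -o pid=,rss=,args= | \
                  awk -v d="$(date '+%F %T')" '{print d, $0}' >> /tmp/agentd_rss.log
              sleep 3600
          done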
