Zabbix is returning false value about idle CPU

  • HarryKalahan
    Member
    • Jan 2014
    • 40

    #1

    Zabbix is returning false value about idle CPU

    Hello all,

    I have a strange problem with CPU monitoring. I have an item with this item key:
    system.cpu.util[,idle,avg1]

    It should report the idle CPU of my Linux server, but it returns 0 all the time. When I run top on the server, the value is between 98 and 100% idle.

    I tried specifying the "all" parameter in the item key, but nothing changes:
    system.cpu.util[all,idle,avg1]

    What do you think is happening?

    I'm using Zabbix 3.2, and on another machine with the same version and the same config I'm getting correct values.

    Thanks in advance!
  • LenR
    Senior Member
    • Sep 2009
    • 1005

    #2
    Do you have the item defined as float?


    • HarryKalahan
      Member
      • Jan 2014
      • 40

      #3
      Yes. It's float.


      • Pitons
        Member
        • Oct 2017
        • 49

        #4
        Hi HarryKalahan,

        Try this from the Zabbix server console:
        zabbix_get -s <IP or Hostname of that server> -k system.cpu.util[,idle]

        Does it return any value?

        P.S. If you don't have zabbix_get installed, it's easily added with yum install zabbix-get (on CentOS/Red Hat) or sudo apt-get install zabbix-get (on Ubuntu/Debian).
        Last edited by Pitons; 07-11-2017, 19:13.


        • HarryKalahan
          Member
          • Jan 2014
          • 40

          #5
          Yes, it returns 0.

          Code:
          # zabbix_get -s host1 -k system.cpu.util[,idle]
          0.000000
          This is the top output for this server:
          top - 12:37:47 up 14 days, 15:54, 1 user, load average: 0,96, 0,57, 0,81
          Tasks: 729 total, 1 running, 728 sleeping, 0 stopped, 0 zombie
          %Cpu(s): 0,0 us, 0,0 sy, 0,0 ni,100,0 id, 0,0 wa, 0,0 hi, 0,0 si, 0,0 st

          From this server I execute the same query against another agent, and I receive correct values:
          Code:
          # zabbix_get -s prd-monitorc-0002 -k system.cpu.util[,idle]
          89.693063


          • Pitons
            Member
            • Oct 2017
            • 49

            #6
            Hi,

            Check that host's zabbix_agentd.log for errors, and could you post the agent config file? (Just mask out any sensitive data.)

            Sounds like some misconfiguration on that host.


            • HarryKalahan
              Member
              • Jan 2014
              • 40

              #7
              Thank you very much for your help.

              Here is the Zabbix agent config file:

              Code:
              PidFile=/run/zabbix/zabbix_agentd.pid
              EnableRemoteCommands=1
              LogFile=/var/log/zabbix_agentd.log
              Server=10.100.100.49,10.100.100.51,10.150.223.30
              StartAgents=8
              ServerActive=10.100.100.49
              Hostname=host5
              MaxLinesPerSecond=100
              Timeout=30
              This host also runs the zabbix_server processes, so it monitors itself.

              Actually, this is a large environment where:
              -host1 (Apache2 + MySQL Server + Zabbix Agent)
              -host2, host3, host6, host7 (Zabbix Proxy + Zabbix Agent)
              -host5 (Zabbix Server + Zabbix Agent)

              I suspected some PHP script executions, but although I stopped them I still receive the value 0.

              Host1, host5, host6 and host7 return incorrect idle CPU values, while host2 and host3 return correct values.

              Code:
              # zabbix_get -s host1 -k system.cpu.util[,idle]
              0.000000
              # zabbix_get -s host2 -k system.cpu.util[,idle]
              94.397157
              # zabbix_get -s host3 -k system.cpu.util[,idle]
              99.109786
              # zabbix_get -s host5 -k system.cpu.util[,idle]
              0.000000
              # zabbix_get -s host6 -k system.cpu.util[,idle]
              0.000046
              # zabbix_get -s host7 -k system.cpu.util[,idle]
              0.000025
              At least the Zabbix proxy hosts were cloned from host2, so they have the same config. I'm migrating the whole environment from Zabbix 2.0.10 to 3.2.8, so that could be a factor, but I don't know.

              As a clue, I notice some strange behaviour when I run top on the hosts that return 0: for the first three seconds the idle value is 0, and after that it is between 90 and 100 percent. That doesn't happen on the hosts that return correct values.

              If I run uptime on these machines, the load average is under 1, and each one has 8 vCPUs.

              Code:
              # uptime
               09:44:51 up 15 days, 13:01,  2 users,  load average: 0,61, 0,33, 0,38
              The rest of the monitored properties (memory, disk space, etc.) return correct values. Only idle CPU is affected on these 4 hosts.

              I hope someone has an idea. Thank you.

              Regards!
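A likely explanation for the top behaviour above: CPU utilisation tools (top, mpstat, the Zabbix agent) don't read percentages directly; they compute them from the difference between two snapshots of the cumulative jiffy counters in /proc/stat. Below is a minimal sketch of that delta calculation, not the Zabbix agent's actual code, using hypothetical counter tuples:

```python
def cpu_percentages(prev, cur):
    """Compute per-state CPU percentages from two cumulative
    /proc/stat samples (tuples of jiffy counters).

    Field order follows /proc/stat: user, nice, system, idle,
    iowait, irq, softirq, steal.
    """
    fields = ("user", "nice", "system", "idle",
              "iowait", "irq", "softirq", "steal")
    deltas = [c - p for p, c in zip(prev, cur)]
    total = sum(deltas)
    if total == 0:
        raise ValueError("no elapsed jiffies between samples")
    return {name: 100.0 * d / total for name, d in zip(fields, deltas)}

# Hypothetical counters: between the two samples the guest spent
# 90 jiffies idle and had 10 jiffies stolen by the hypervisor.
prev = (100, 0, 100, 800, 0, 0, 0, 0)
cur  = (100, 0, 100, 890, 0, 0, 0, 10)
pcts = cpu_percentages(prev, cur)
print(round(pcts["idle"], 1), round(pcts["steal"], 1))  # 90.0 10.0
```

If two tools sample the counters over different windows (or one includes steal time and the other doesn't), they can legitimately disagree, which is consistent with top and the agent reporting different idle values on the same host.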


              • jan.garaj
                Senior Member
                Zabbix Certified Specialist
                • Jan 2010
                • 506

                #8
                Is it any exotic OS, kernel version? Are there any security features enabled selinux, apparmor,...)? Try to increase debug level and check Zabbix agent logs.
                Devops Monitoring Expert advice: Dockerize/automate/monitor all the things.
                My DevOps stack: Docker / Kubernetes / Mesos / ECS / Terraform / Elasticsearch / Zabbix / Grafana / Puppet / Ansible / Vagrant


                • HarryKalahan
                  Member
                  • Jan 2014
                  • 40

                  #9
                  It's Debian 9.1 (64-bit), recently upgraded from Debian 7.8.

                  Code:
                  # lsb_release -d
                  Description:    Debian GNU/Linux 9.1 (stretch)
                  root@host1:~# uname -a
                  Linux host1 4.9.0-3-amd64 #1 SMP Debian 4.9.30-2+deb9u2 (2017-06-26) x86_64 GNU/Linux
                  I attach the zabbix_agentd.log after raising the DebugLevel to level 5.

                  I think there's nothing interesting in it.

                  Code:
                   38101:20171109:161946.495 End of process_active_checks()
                   38101:20171109:161946.495 In get_min_nextcheck()
                   38101:20171109:161946.495 End of get_min_nextcheck():1510240816
                   38101:20171109:161946.495 __zbx_zbx_setproctitle() title:'active checks #1 [idle 1 sec]'
                   38092:20171109:161947.203 __zbx_zbx_setproctitle() title:'listener #3 [processing request]'
                   38092:20171109:161947.203 Requested [system.cpu.load[]]
                   38092:20171109:161947.203 Sending back [0.270000]
                   38092:20171109:161947.203 __zbx_zbx_setproctitle() title:'listener #3 [waiting for connection]'
                   38089:20171109:161947.377 __zbx_zbx_setproctitle() title:'collector [processing data]'
                   38089:20171109:161947.379 In update_cpustats()
                   38089:20171109:161947.379 End of update_cpustats()
                   38089:20171109:161947.380 __zbx_zbx_setproctitle() title:'collector [idle 1 sec]'
                   38101:20171109:161947.495 In send_buffer() host:'10.100.100.49' port:10051 entries:0/100
                   38101:20171109:161947.495 End of send_buffer():SUCCEED
                   38101:20171109:161947.495 __zbx_zbx_setproctitle() title:'active checks #1 [idle 1 sec]'
                   38093:20171109:161948.256 __zbx_zbx_setproctitle() title:'listener #4 [processing request]'
                   38093:20171109:161948.256 Requested [system.cpu.util[,idle,avg1]]
                    38093:20171109:161948.256 Sending back [0.000000]

                  --
                  Sorry, I can't attach the whole file; there's very little space for attachments on this forum. Here are the first lines:

                  Code:
                   17808:20171109:161916.169 Got signal [signal:15(SIGTERM),sender_pid:38048,sender_uid:0,reason:0]. Exiting ...
                   17808:20171109:161916.188 Zabbix Agent stopped. Zabbix 3.2.8 (revision 72884).
                   38088:20171109:161916.327 Starting Zabbix Agent [host5]. Zabbix 3.2.8 (revision 72884).
                   38088:20171109:161916.327 **** Enabled features ****
                   38088:20171109:161916.327 IPv6 support:          YES
                   38088:20171109:161916.327 TLS support:           YES
                   38088:20171109:161916.327 **************************
                   38088:20171109:161916.327 using configuration file: /etc/zabbix/zabbix_agentd.conf
                   38088:20171109:161916.327 In zbx_load_modules()
                   38088:20171109:161916.327 End of zbx_load_modules():SUCCEED
                   38088:20171109:161916.327 In init_collector_data()
                   38088:20171109:161916.328 In zbx_dshm_create() proj_id:112 size:0
                   38088:20171109:161916.328 End of zbx_dshm_create():SUCCEED shmid:-1
                   38088:20171109:161916.328 End of init_collector_data()
                   38088:20171109:161916.328 agent #0 started [main process]
                   38089:20171109:161916.328 agent #1 started [collector]
                   38089:20171109:161916.328 In init_cpu_collector()
                   38089:20171109:161916.328 End of init_cpu_collector():SUCCEED
                   38089:20171109:161916.328 __zbx_zbx_setproctitle() title:'collector [processing data]'
                   38089:20171109:161916.328 In update_cpustats()
                   38090:20171109:161916.329 agent #2 started[listener #1]
                   38090:20171109:161916.329 In zbx_tls_init_child()
                   38089:20171109:161916.329 End of update_cpustats()
                   38089:20171109:161916.329 __zbx_zbx_setproctitle() title:'collector [idle 1 sec]'
                   38091:20171109:161916.329 agent #3 started[listener #2]
                   38091:20171109:161916.330 In zbx_tls_init_child()
                   38090:20171109:161916.332 OpenSSL library (version OpenSSL 1.0.1e 11 Feb 2013) initialized
                   38090:20171109:161916.332 End of zbx_tls_init_child()
                   38090:20171109:161916.332 __zbx_zbx_setproctitle() title:'listener #1 [waiting for connection]'
                   38093:20171109:161916.332 agent #5 started[listener #4]
                   38091:20171109:161916.332 OpenSSL library (version OpenSSL 1.0.1e 11 Feb 2013) initialized
                   38093:20171109:161916.332 In zbx_tls_init_child()
                   38091:20171109:161916.332 End of zbx_tls_init_child()
                   38091:20171109:161916.332 __zbx_zbx_setproctitle() title:'listener #2 [waiting for connection]'
                   38096:20171109:161916.334 agent #8 started[listener #7]
                   38096:20171109:161916.334 In zbx_tls_init_child()
                   38092:20171109:161916.335 agent #4 started[listener #3]
                   38092:20171109:161916.335 In zbx_tls_init_child()
                   38096:20171109:161916.337 OpenSSL library (version OpenSSL 1.0.1e 11 Feb 2013) initialized
                   38096:20171109:161916.337 End of zbx_tls_init_child()
                   38096:20171109:161916.337 __zbx_zbx_setproctitle() title:'listener #7 [waiting for connection]'
                   38092:20171109:161916.338 OpenSSL library (version OpenSSL 1.0.1e 11 Feb 2013) initialized
                   38092:20171109:161916.338 End of zbx_tls_init_child()
                   38092:20171109:161916.338 __zbx_zbx_setproctitle() title:'listener #3 [waiting for connection]'
                   38095:20171109:161916.339 agent #7 started[listener #6]
                   38095:20171109:161916.339 In zbx_tls_init_child()
                   38093:20171109:161916.339 OpenSSL library (version OpenSSL 1.0.1e 11 Feb 2013) initialized
                   38093:20171109:161916.339 End of zbx_tls_init_child()
                   38093:20171109:161916.339 __zbx_zbx_setproctitle() title:'listener #4 [waiting for connection]'
                   38094:20171109:161916.339 agent #6 started[listener #5]
                   38094:20171109:161916.339 In zbx_tls_init_child()
                   38095:20171109:161916.341 OpenSSL library (version OpenSSL 1.0.1e 11 Feb 2013) initialized
                   38095:20171109:161916.342 End of zbx_tls_init_child()
                   38095:20171109:161916.342 __zbx_zbx_setproctitle() title:'listener #6 [waiting for connection]'
                   38100:20171109:161916.344 agent #9 started[listener #8]
                   38100:20171109:161916.344 In zbx_tls_init_child()
                   38101:20171109:161916.345 agent #10 started [active checks #1]
                   38101:20171109:161916.345 In zbx_tls_init_child()
                   38094:20171109:161916.347 OpenSSL library (version OpenSSL 1.0.1e 11 Feb 2013) initialized
                   38094:20171109:161916.347 End of zbx_tls_init_child()
                   38094:20171109:161916.347 __zbx_zbx_setproctitle() title:'listener #5 [waiting for connection]'
                   38100:20171109:161916.347 OpenSSL library (version OpenSSL 1.0.1e 11 Feb 2013) initialized
                   38100:20171109:161916.348 End of zbx_tls_init_child()
                   38100:20171109:161916.348 __zbx_zbx_setproctitle() title:'listener #8 [waiting for connection]'
                   38101:20171109:161916.348 OpenSSL library (version OpenSSL 1.0.1e 11 Feb 2013) initialized
                   38101:20171109:161916.349 End of zbx_tls_init_child()
                   38101:20171109:161916.349 In init_active_metrics()
                   38101:20171109:161916.349 buffer: first allocation for 100 elements
                   38101:20171109:161916.349 End of init_active_metrics()
                   38101:20171109:161916.349 In send_buffer() host:'10.100.100.49' port:10051 entries:0/100
                   38101:20171109:161916.349 End of send_buffer():SUCCEED
                   38101:20171109:161916.349 __zbx_zbx_setproctitle() title:'active checks #1 [getting list of active checks]'
                   38101:20171109:161916.349 In refresh_active_checks() host:'10.100.100.49' port:10051
                   38101:20171109:161916.349 sending [{"request":"active checks","host":"host5"}]
                   38101:20171109:161916.349 before read
                   38101:20171109:161916.355 got [{"response":"success","data":[{"key":"agent.version","delay":86400,"lastlogsize":0,"mtime":0},{"key":"kernel.maxproc","delay":86400,"lastlogsize":0,"mtime":0},{"key":"log[\"/var/log/linphoned.log\",,,]","delay":30,"lastlogsize":1251,"mtime":0},{"key":"system.hostname","delay":86400,"lastlogsize":0,"mtime":0},{"key":"system.swap.size[,total]","delay":86400,"lastlogsize":0,"mtime":0},{"key":"system.uname","delay":86400,"lastlogsize":0,"mtime":0},{"key":"vfs.file.cksum[/etc/passwd]","delay":86400,"lastlogsize":0,"mtime":0},{"key":"vfs.file.cksum[/etc/services]","delay":86400,"lastlogsize":0,"mtime":0},{"key":"vfs.fs.size[/,total]","delay":86400,"lastlogsize":0,"mtime":0},{"key":"vm.memory.size[total]","delay":86400,"lastlogsize":0,"mtime":0}]}]
                   38101:20171109:161916.355 In parse_list_of_checks()
                   38101:20171109:161916.355 In add_check() key:'agent.version' refresh:86400 lastlogsize:0 mtime:0
                   38101:20171109:161916.355 End of add_check()
                   38101:20171109:161916.355 In add_check() key:'kernel.maxproc' refresh:86400 lastlogsize:0 mtime:0
                   38101:20171109:161916.355 End of add_check()
                   38101:20171109:161916.355 In add_check() key:'log["/var/log/linphoned.log",,,]' refresh:30 lastlogsize:1251 mtime:0
                   38101:20171109:161916.355 End of add_check()
                   38101:20171109:161916.355 In add_check() key:'system.hostname' refresh:86400 lastlogsize:0 mtime:0
                   38101:20171109:161916.355 End of add_check()
                   38101:20171109:161916.355 In add_check() key:'system.swap.size[,total]' refresh:86400 lastlogsize:0 mtime:0
                   38101:20171109:161916.355 End of add_check()
                   38101:20171109:161916.355 In add_check() key:'system.uname' refresh:86400 lastlogsize:0 mtime:0
                   38101:20171109:161916.355 End of add_check()
                   38101:20171109:161916.355 In add_check() key:'vfs.file.cksum[/etc/passwd]' refresh:86400 lastlogsize:0 mtime:0
                   38101:20171109:161916.355 End of add_check()
                   38101:20171109:161916.355 In add_check() key:'vfs.file.cksum[/etc/services]' refresh:86400 lastlogsize:0 mtime:0
                   38101:20171109:161916.355 End of add_check()
                   38101:20171109:161916.355 In add_check() key:'vfs.fs.size[/,total]' refresh:86400 lastlogsize:0 mtime:0
                   38101:20171109:161916.355 End of add_check()
                   38101:20171109:161916.355 In add_check() key:'vm.memory.size[total]' refresh:86400 lastlogsize:0 mtime:0
                   38101:20171109:161916.355 End of add_check()
                   38101:20171109:161916.355 End of parse_list_of_checks():SUCCEED
                   38101:20171109:161916.356 End of refresh_active_checks():SUCCEED
                   38101:20171109:161916.356 __zbx_zbx_setproctitle() title:'active checks #1 [processing active checks]'
                   38101:20171109:161916.356 In process_active_checks() server:'10.100.100.49' port:10051
                   38101:20171109:161916.356 for key [agent.version] received value [3.2.8]
                   38101:20171109:161916.356 In process_value() key:'host5:agent.version' value:'3.2.8'
                   38101:20171109:161916.356 In send_buffer() host:'10.100.100.49' port:10051 entries:0/100
                   38101:20171109:161916.356 End of send_buffer():SUCCEED
                   38101:20171109:161916.356 buffer: new element 0
                   38101:20171109:161916.356 End of process_value():SUCCEED
                   38101:20171109:161916.356 In need_meta_update() key:agent.version
                   38101:20171109:161916.356 End of need_meta_update():FAIL
                   38101:20171109:161916.356 for key [kernel.maxproc] received value [65536]
                   38101:20171109:161916.356 In process_value() key:'host5:kernel.maxproc' value:'65536'
                   38101:20171109:161916.356 In send_buffer() host:'10.100.100.49' port:10051 entries:1/100
                   38101:20171109:161916.356 send_buffer() now:1510240756 lastsent:1510240756 now-lastsent:0 BufferSend:5; will not send now
                   38101:20171109:161916.356 End of send_buffer():SUCCEED
                   38101:20171109:161916.356 buffer: new element 1
                   38101:20171109:161916.356 End of process_value():SUCCEED
                   38101:20171109:161916.356 In need_meta_update() key:kernel.maxproc
                   38101:20171109:161916.356 End of need_meta_update():FAIL
                   38101:20171109:161916.356 In process_logrt() is_logrt:0 is_count:0 filename:'/var/log/linphoned.log' lastlogsize:1251 mtime:0
                   38101:20171109:161916.356 In add_logfile() filename:'/var/log/linphoned.log' mtime:1510227125 size:1251
                   38101:20171109:161916.356 add_logfile() logfiles:0x5576531c9dc0 logfiles_alloc:64
                   38101:20171109:161916.356 End of add_logfile()
                   38101:20171109:161916.356 process_logrt() old file list:
                   38101:20171109:161916.356    file list empty
                   38101:20171109:161916.356 process_logrt() new file list: (mtime:0 lastlogsize:1251 start_idx:0)
                   38101:20171109:161916.356    nr:0 filename:'/var/log/linphoned.log' mtime:1510227125 size:1251 processed_size:0 seq:0 incomplete:0 dev:64771 ino_hi:0 ino_lo:160 md5size:512 md5buf:997ed6aefbcf05069e7166108ef4b9ca
                   38101:20171109:161916.356 In process_log() filename:'/var/log/linphoned.log' lastlogsize:1251 mtime:0
                   38101:20171109:161916.356 End of process_log() filename:'/var/log/linphoned.log' lastlogsize:1251 mtime:0 ret:SUCCEED processed_bytes:0
                   38101:20171109:161916.356 End of process_logrt():SUCCEED
                   38101:20171109:161916.356 In need_meta_update() key:log["/var/log/linphoned.log",,,]
                   38101:20171109:161916.356 End of need_meta_update():SUCCEED
                   38101:20171109:161916.356 In process_value() key:'host5:log["/var/log/linphoned.log",,,]' value:'(null)'
                   38101:20171109:161916.357 In send_buffer() host:'10.100.100.49' port:10051 entries:2/100
                   38101:20171109:161916.357 send_buffer() now:1510240756 lastsent:1510240756 now-lastsent:0 BufferSend:5; will not send now
                   38101:20171109:161916.357 End of send_buffer():SUCCEED
                   38101:20171109:161916.357 buffer: new element 2
                   38101:20171109:161916.357 End of process_value():SUCCEED
                   38101:20171109:161916.357 for key [system.hostname] received value [host5]
                   38101:20171109:161916.357 In process_value() key:'host5:system.hostname' value:'host5'
                   38101:20171109:161916.357 In send_buffer() host:'10.100.100.49' port:10051 entries:3/100
                   38101:20171109:161916.357 send_buffer() now:1510240756 lastsent:1510240756 now-lastsent:0 BufferSend:5; will not send now
                   38101:20171109:161916.357 End of send_buffer():SUCCEED
                   38101:20171109:161916.357 buffer: new element 3
                   38101:20171109:161916.357 End of process_value():SUCCEED
                   38101:20171109:161916.357 In need_meta_update() key:system.hostname
                   38101:20171109:161916.357 End of need_meta_update():FAIL
                   38101:20171109:161916.357 for key [system.swap.size[,total]] received value [4999606272]
                   38101:20171109:161916.357 In process_value() key:'host5:system.swap.size[,total]' value:'4999606272'
                   38101:20171109:161916.357 In send_buffer() host:'10.100.100.49' port:10051 entries:4/100
                   38101:20171109:161916.357 send_buffer() now:1510240756 lastsent:1510240756 now-lastsent:0 BufferSend:5; will not send now
                   38101:20171109:161916.357 End of send_buffer():SUCCEED
                   38101:20171109:161916.357 buffer: new element 4
                   38101:20171109:161916.357 End of process_value():SUCCEED
                   38101:20171109:161916.357 In need_meta_update() key:system.swap.size[,total]
                   38101:20171109:161916.357 End of need_meta_update():FAIL
                   38101:20171109:161916.357 for key [system.uname] received value [Linux host5 4.9.0-3-amd64 #1 SMP Debian 4.9.30-2+deb9u2 (2017-06-26) x86_64]
                   38101:20171109:161916.357 In process_value() key:'host5:system.uname' value:'Linux host5 4.9.0-3-amd64 #1 SMP Debian 4.9.30-2+deb9u2 (2017-06-26) x86_64'
                   38101:20171109:161916.357 In send_buffer() host:'10.100.100.49' port:10051 entries:5/100
                   38101:20171109:161916.357 send_buffer() now:1510240756 lastsent:1510240756 now-lastsent:0 BufferSend:5; will not send now
                   38101:20171109:161916.357 End of send_buffer():SUCCEED
                   38101:20171109:161916.357 buffer: new element 5
                   38101:20171109:161916.357 End of process_value():SUCCEED
                   38101:20171109:161916.357 In need_meta_update() key:system.uname
                   38101:20171109:161916.357 End of need_meta_update():FAIL
                  Last edited by HarryKalahan; 09-11-2017, 18:12. Reason: file forgotten


                  • HarryKalahan
                    Member
                    • Jan 2014
                    • 40

                    #10
                    Hi again,

                     I think I've found the problem: some of my virtual machines might be sharing vCPUs, so although 8 vCPUs are assigned to each virtual machine, they can be busy serving other guests.

                     If I execute mpstat on host5:

                     Code:
                     # mpstat
                     Linux 4.9.0-3-amd64 (prd-monitorc-0005) 10/11/17 _x86_64_ (8 CPU)

                     13:12:34     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
                     13:12:34     all    0,00    0,00    0,00    0,00    0,00    0,00   99,08    0,00    0,00    0,92
                     The steal CPU time is near 100%, which indicates CPU allocation problems at the hypervisor level.

                     I'm happy to find this, because it means the monitoring is actually correct: idle really is near 0% all the time, even though top shows 100% idle after 3 seconds.

                     I can't manage the hypervisor, so I'll wait until the current production machines are shut down. After that I'll check again, and I hope this problem disappears.
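As a quick sanity check, the %steal figure can be pulled out of an mpstat "all" row programmatically, e.g. for a cron job that flags contended guests. A minimal sketch, assuming mpstat's default column layout and allowing for the locale comma decimals shown above (the 50% threshold is an arbitrary illustration, not a Zabbix default):

```python
def steal_percent(mpstat_all_line):
    """Extract %steal from an mpstat 'all' row.

    Assumes mpstat's default column order, where %steal is the
    7th percentage column: time, CPU, %usr, %nice, %sys, %iowait,
    %irq, %soft, %steal, ...  Handles locale comma decimals.
    """
    parts = mpstat_all_line.split()
    return float(parts[8].replace(",", "."))

# The 'all' row from the mpstat output above.
line = ("13:12:34     all    0,00    0,00    0,00    0,00"
        "    0,00    0,00   99,08    0,00    0,00    0,92")
s = steal_percent(line)
print(s)         # 99.08
print(s > 50.0)  # True -> heavy hypervisor contention
```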


                    • LenR
                      Senior Member
                      • Sep 2009
                      • 1005

                      #11
                      I've often wanted a big poster that says "you can't trust statistics that a virtual machine reports about itself". I've been "virtual" since at least 1984 :-)

                       Do you have the guest "tools" installed for the hypervisor platform? I think the environment awareness the tools bring may improve the reported statistics.

                       I've thought about something like "capacity on demand" (that's a stolen phrase), where a Zabbix trigger could dynamically provision more resources, but it completely violates our political environment.


                      • HarryKalahan
                        Member
                        • Jan 2014
                        • 40

                        #12
                        Hi LenR,

                        Thank you for your comments, I'll keep them in mind.

                         I suppose I could install the virtual machine tools, but the hypervisors use Red Hat Virtualization and are managed by the customer. I don't know this kind of environment, although I could ask the customer about it, because it could improve machine management and resource allocation.

                         While we wait for the customer to shut down the production machines running the old Zabbix server (the 6 machines on 2.0), we'll keep these new ones running with Zabbix 3.2, so the shared resources will decrease significantly and we'll check the behaviour of the CPU monitoring.

                        I'll update this post with new information.

                        Thanks all of you for your help.

                        Regards.

