vm.memory.size[] - invalid first parameter

  • klaauser
    Junior Member
    • Apr 2016
    • 18

    #1

    vm.memory.size[] - invalid first parameter

    I've got six memory-related items being monitored on my CentOS 7 Zabbix server (NOTE: I'm doing this because 'used' memory was giving me a combination of active, cached, and wired memory and was not an accurate depiction); a quick agent-side check of each key is sketched after the list:
    • Active Memory Used: vm.memory.size[active] | Numeric (unsigned)/Decimal
    • Cached Memory Used: vm.memory.size[cached] | Numeric (unsigned)/Decimal
    • Inactive Memory Used: vm.memory.size[inactive] | Numeric (unsigned)/Decimal
    • Total Memory: vm.memory.size[total] | Numeric (unsigned)/Decimal
    • Wired Memory Used: vm.memory.size[wired] | Numeric (unsigned)/Decimal
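    A quick way to check each key directly against the agent on the monitored host (a sketch; it assumes zabbix-agent is installed and uses exactly the mode names listed above):

      # zabbix_agentd -t evaluates a single item key locally and prints the result,
      # so you can see the returned value or the not-supported error for each mode.
      for mode in active cached inactive total wired; do
          zabbix_agentd -t "vm.memory.size[$mode]"
      done

    The same keys can also be queried remotely from the server with zabbix_get -s <host> -k 'vm.memory.size[active]'.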

    The issue that I'm having is that Cached Memory Used and Total Memory are the only items that are working properly. Active, Inactive, and Wired Memory all show Not Supported with the error 'Invalid first parameter'.

    I really can't fathom what's going on, because the syntax is the same for all of the items, and according to the Zabbix documentation active, inactive, and wired are all valid parameters.

    Running Zabbix 3.2.1 on CentOS 7.
  • kloczek
    Senior Member
    • Jun 2006
    • 1771

    #2
    Everything is in the online documentation.
    At https://www.zabbix.com/documentation...arams#see_also there is a link to http://blog.zabbix.com/when-alexei-i...vm.memory.size, where everything is explained.


    • klaauser
      Junior Member
      • Apr 2016
      • 18

      #3
      I have read all of that documentation. It says that on Linux I should be able to use those parameters. If I'm reading something wrong, please tell me, but don't just link to articles without any useful context.


      • kloczek
        Senior Member
        • Jun 2006
        • 1771

        #4
        Quote:

        "The main part of development was related to a single memory check: vm.memory.size. The problem pointed out by Pavel was that, for instance, vm.memory.size[used] on FreeBSD did not return very useful information and there was no way to monitor FreeBSD-specific types of memory: active, inactive, wired. Other platforms turned out to have the same problem and it was clear that vm.memory.size had to be redesigned a bit."

        So you are trying to use FreeBSD-specific memory types on Linux.
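        If you still need Active/Inactive figures on Linux, one workaround is a custom agent key that reads /proc/meminfo directly. A minimal sketch (the key name vm.memory.meminfo is made up here, not a built-in):

          # /etc/zabbix/zabbix_agentd.d/meminfo.conf (hypothetical custom key)
          # $1 is the key parameter (e.g. Active); $$2 escapes to awk's $2.
          # /proc/meminfo reports kB, so multiply by 1024 to get bytes.
          UserParameter=vm.memory.meminfo[*],awk '/^$1:/ {print $$2 * 1024}' /proc/meminfo

        After restarting the agent you can create items such as vm.memory.meminfo[Active] or vm.memory.meminfo[Inactive] (Numeric (unsigned), bytes).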


        • klaauser
          Junior Member
          • Apr 2016
          • 18

          #5
           Ah, that makes sense; sorry for the mistake on my part.

           I guess that makes my question a bit different, then...
           My Zabbix server is virtual and has 8GB of RAM. vm.memory.size[pused] displays memory usage in the high 90s (percent) and vm.memory.size[cached] displays between 6 and 7GB.

           Running 'free -m' on the Zabbix server currently shows only 895MB of memory used, and 5119MB of swap free.

          Looking at the VM's resource usage in vSphere, I see that Active Memory usage is at 327MB.

          Needless to say, I trust both what Linux reports and what vSphere displays more than what Zabbix is picking up, but my question is... why?
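           For what it's worth, the blog post linked above suggests one explanation (assuming it applies to this version): on Linux, Zabbix derives 'used' as roughly total minus free, so the 6-7GB of page cache counts as used and pused lands in the 90s, while the 'used' column of free -m on CentOS 7 subtracts buffers/cache. A rough recomputation from /proc/meminfo:

             # Approximate pused the way Zabbix appears to compute it: (total - free) / total.
             # The gap vs. free -m comes from the page cache being counted as "used" here.
             awk '/^MemTotal:/ {t=$2} /^MemFree:/ {f=$2} END {printf "pused ~ %.1f%%\n", (t-f)*100/t}' /proc/meminfo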


          • kloczek
            Senior Member
            • Jun 2006
            • 1771

            #6
            Originally posted by klaauser
            Looking at the VM's resource usage in vSphere, I see that Active Memory usage is at 327MB.

            Needless to say, I trust both what Linux reports and what vSphere displays more than what Zabbix is picking up, but my question is... why?
             That is probably the MRU/MFU (Most Recently/Most Frequently Used) size of the memory touched by the processes on the run queue inside the VM.
             Another consequence of the above: if the active memory used by all VMs is greater than the total size of the CPUs' L3 caches, some of the VMs may hit a CPU-cache bottleneck.

             The funny thing is that x86 as a CPU architecture has more than a few decades of history, and even the latest x86 CPUs carry many parts that must be implemented as part of that legacy. Another consequence is that x86 CPUs are not well designed to run concurrent, performance-demanding VMs on the same machine. Why? Because the architecture does not guarantee a given VM access to a guaranteed share of the shared CPU caches.
             The only CPU architecture known to me that partitions CPU caches per VM is SPARC >= T5 (T5, S7/M7); with SPARC LDOMs such a trick is possible.

             Measuring the flow of data that a given process or VM moves in and out of the CPU caches is simply not possible. Even counting memory transactions to RAM is not possible: motherboard chipsets have limits on the number of gigatransactions per second (GT/s) to RAM, but none of the x86 chipsets (as far as I know; correct me if I'm wrong) provides registers counting bytes or transactions read from / written to RAM, which would make it possible to spot RAM bottlenecks or hot spots.

             The only way to count CPU-cache hits/misses is via the CPU performance counters (CPC). Solaris with DTrace has a nice cpc provider which allows observing them across many processes or system-wide (https://docs.oracle.com/cd/E53394_01...-provider.html). Linux does not have such possibilities, even with the existing DTrace-on-Linux implementations, because the cpc provider is quite hard to port (http://crtags.blogspot.co.uk/2012/02...-provider.html).
             On Linux it is possible to use the CPC register data only per process, using perf; IIRC valgrind can also program and read the CPC registers.
             Nevertheless, both tools are completely useless from a monitoring point of view.
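             As an illustration of the per-process route mentioned above (a sketch; <PID> is a placeholder, and perf's generic events map to different hardware counters depending on the CPU):

               # Attach to a running process and count cache activity for 10 seconds.
               # cache-references / cache-misses are perf's generic hardware events.
               perf stat -e cache-references,cache-misses -p <PID> -- sleep 10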

             As a consequence, keeping high-performance VMs running concurrently on modern x86 hardware is more like working as a wizard than as an engineer :P

             The only way known so far to deal with performance issues in large-scale flat x86 VM environments is a kind of "semi-automatic adaptive" tactic: constantly failing VMs over to another, less-loaded hardware instance whenever some minimal performance degradation is spotted. If you are lucky, you have enough headroom in hardware resources, and the workloads across all VMs are not changing, then in theory the number of VM migrations should stay at a minimum (even zero migrations per unit of time), providing the maximum possible total performance across all VMs. In practice, at large scale, it is usually not as shiny as it may look :P
             The problem with this tactic is that if you are unlucky, even a few very badly behaving VMs may kill quite a large number of other, completely innocent VMs.

             It is nothing new under the sun: all the old problems between processes/threads running concurrently inside a single OS are more and more moving to the VM layer.
             Those problems should be solved the same way, by proper resource (CPU, memory, network, storage) management, but at the VM layer.

             IMO we are still waiting for a transition to new CPU architecture(s) far better suited to running VMs on top of Linux.
             Why? Because after using and observing Linux for more than 20 years, I'm pessimistic that these issues can be solved in Linux by, for example, implementing a process scheduler as robust and well designed as the Solaris FSS scheduler, plus a few other things like the Dynamic Reconfiguration subsystem and proper security/namespace separation.
             In the case of Linux this will probably never happen (because of NIH syndrome).
