Item key zabbix 3.0 vm.memory.size[used]

  • VR1387
    Junior Member
    • Jun 2018
    • 8

    #1

Item key zabbix 3.0 vm.memory.size[used]

    Hello everyone.

    I have the following question.

When monitoring a CentOS 7 or CentOS 6.x host with the item vm.memory.size[used], the value the graph shows me is the total memory the server has. Shouldn't it show the memory actually used?

In the attached image you can see the keys I'm using for the different values. In two cases I'm querying both keys, vm.memory.size[used] and vm.memory.size[available], and the behavior is not the same on all servers.

As you can see in the z2 image, the same item becomes not supported; the only difference between this item and the other one is that I swap the values.

What I mean is:

In one calculated item:

    last ("vm.memory.size [available]") - last ("vm.memory.size [used]") z2 image

and in the other one:

    last ("vm.memory.size [used]") - last ("vm.memory.size [available]")

    I read:


    I still do not understand the behavior


    Thank you!
  • kernbug
    Senior Member
    • Feb 2013
    • 330

    #2
    Originally posted by VR1387
    Hello

Please show the zabbix_get output from the Zabbix server for vm.memory.size[used] on CentOS 7 and CentOS 6, and also for vm.memory.size[available] on both.

    And from console (6/7):
    grep MemTotal /proc/meminfo | grep -E --only-matching '[[:digit:]]+'
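Something along these lines, run from the Zabbix server (the host name here is just a placeholder):
Code:
# replace <monitored-host> with the address of the agent host
zabbix_get -s <monitored-host> -k 'vm.memory.size[used]'
zabbix_get -s <monitored-host> -k 'vm.memory.size[available]'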


    • kloczek
      Senior Member
      • Jun 2006
      • 1771

      #3
      Just FTR.
If someone is using Linux with a 4.x kernel: the memory management paradigm in those kernels has changed slightly, and free vs. used memory is now less important, while active vs. inactive is sometimes more important.
This is why Zabbix 4.0 will be able to support new memory keys like:
      - vm.memory.size[active]
      - vm.memory.size[anon]
      - vm.memory.size[inactive]
      - vm.memory.size[slabs]

A patch for Zabbix 3.4 with those new keys can be found at https://support.zabbix.com/browse/ZBX-13233
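As a rough illustration only (this is my guess at which /proc/meminfo counters those keys map to), the raw values are already visible on any recent kernel:
Code:
# approximate /proc/meminfo counterparts of the new keys
$ grep -E '^(Active|Inactive|AnonPages|Slab):' /proc/meminfo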


      • VR1387
        Junior Member
        • Jun 2018
        • 8

        #4
        Originally posted by kernbug

        Hello

Please show the zabbix_get output from the Zabbix server for vm.memory.size[used] on CentOS 7 and CentOS 6, and also for vm.memory.size[available] on both.

        And from console (6/7):
        grep MemTotal /proc/meminfo | grep -E --only-matching '[[:digit:]]+'
        Hello

        CentOS 7
[attachment: z100.png]

This
[attachment: z101.png]

htop
[attachment: z102.png]

In Zabbix
[attachment: z103.png]


In no case does it show me the actual memory usage of the server.
The calculated item gives me a roughly approximate value, but what I see is still not the real usage of the server.




        • VR1387
          Junior Member
          • Jun 2018
          • 8

          #5
          Originally posted by kloczek
Just FTR.
If someone is using Linux with a 4.x kernel: the memory management paradigm in those kernels has changed slightly, and free vs. used memory is now less important, while active vs. inactive is sometimes more important.
This is why Zabbix 4.0 will be able to support new memory keys like:
- vm.memory.size[active]
- vm.memory.size[anon]
- vm.memory.size[inactive]
- vm.memory.size[slabs]

A patch for Zabbix 3.4 with those new keys can be found at https://support.zabbix.com/browse/ZBX-13233
CentOS 6

[attachment: z200.png]

This
[attachment: z201.png]

htop
[attachment: z202.png]

Zabbix
[attachment: z203.png]
In no case does it show me the actual memory usage of the server.
The calculated item gives me a roughly approximate value, but what I see is still not the real usage of the server.

One of the closest values is the available memory percentage, which would work perfectly well for a trigger.
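For example, a minimal trigger sketch along those lines (assuming vm.memory.size[pavailable] is supported by the agent, and using a placeholder host name):
Code:
{my-centos-host:vm.memory.size[pavailable].last()}<10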


          • kloczek
            Senior Member
            • Jun 2006
            • 1771

            #6
            Originally posted by VR1387
One of the closest values is the available memory percentage, which would work perfectly well for a trigger.
            Code:
            $ grep -i active /proc/meminfo ; uname -a
            Active:          5565924 kB
            Inactive:        1254940 kB
            Active(anon):    4869884 kB
            Inactive(anon):   618044 kB
            Active(file):     696040 kB
            Inactive(file):   636896 kB
            Linux domek 4.18.0-0.rc0.git2.1.fc29.x86_64 #1 SMP Tue Jun 5 20:01:21 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Embedding triggers about memory monitoring in a generic base-OS template is really wrong.
Why?

Here is a slightly longer answer.
In some scenarios the application running on a given system may allocate ALL the memory it can, leaving only a few small scraps, and looking at free/available-memory-percentage triggers you may get the impression that something is wrong on that system because the application seems to suffer from "memory starvation". In such a case you want a completely opposite type of alarm on that system. Such an alarm could be used to check the application's memory allocation settings, or to reduce the system memory and lower the cost of running the platform for that application.

A typical case is a DB backend.
Oracle DB, PostgreSQL, MySQL and others allocate a fixed amount of memory, and those applications usually open all files with O_DIRECT. These applications/services organize their own caching of the database content inside the allocated memory. In some scenarios this can be far more effective than using the same memory as buffer cache.
Even within the area of DB backends the situation may be the opposite.
For example, when I'm running SQL engines on top of Solaris with ZFS beneath, I give the MySQL InnoDB pool only enough memory to get a good enough hit/miss ratio on the indexes. The ZFS caching layer, ARC plus optional L2ARC, especially with ZFS transparent compression enabled, provides far better caching in ARC than MySQL does in the InnoDB pool. Why? Mostly because ARC keeps compressed content in the cache, available to the application over the VFS layer.
In that case the physical memory used by ARC works as effectively as its physical size multiplied by the average compression ratio. In other words, if your system is using 100-200 GB of ARC and the data stored in ARC compresses at a 10x ratio, such 100 GB works like 1 TB of RAM would without compression.
If someone has been scratching their head asking themselves "why, today, when MySQL and PostgreSQL are free, are some people ready to pay huge licensing costs for Oracle DB?", the answer is: because that engine, with columnar compression, can provide a 10-30x compression ratio for some database contents, a level unreachable for any MySQL or PostgreSQL. Just think about the hardware price difference between a typical pizza-box server, sometimes even with 1 TB of memory, and hardware that could provide 20-30 TB of RAM to a single OS image/instance.

Another typical representative of applications that allocate as much memory as possible is the JVM.
Usually, as long as the Java application has little to do with local FS interaction, it will be given almost 100% of the memory that can be allocated, leaving only enough to log in over a single ssh session.
In the case of OS-layer memory monitoring you would definitely want to be alarmed that the OS memory is over-specced or the JVM settings are wrong, wouldn't you?

The completely opposite case is possible as well.
You may have an application or service running on a system which relies completely on the OS-layer buffer cache and allocates only a small chunk of the available memory.
A typical case is an NFS server.
Someone who doesn't know what kind of application is running on such a system may get the impression that the system has hugely over-specced memory, and as a result of a "too much free memory" alarm may even try to raise a change request to reduce the memory on that system, or request migrating the application/service to physical hardware with less memory.
Of course, an NFS server on Solaris serving ZFS resources is integrated with ZFS caching, so in that case it will only be using the ZFS ARC/L2ARC, and looking at buffer cache metrics provided over kstat you may be looking at the wrong metrics.
On Linux you may be interested in SLAB allocator stats, but on Solaris you should be looking at ARC/L2ARC monitoring data.


htop, top, and other tools like vmstat or iostat are nothing more than tools providing some data which, after sampling, needs to be analysed together with other details that those tools cannot even reach or add to the view of the data they generate.

And yet another small example from my laptop:

            Code:
            $ wc -l /proc/meminfo
            48 /proc/meminfo
As you can see, the latest Linux provides a really rich set of metrics about memory usage.
Just one of the lines from above:
            Code:
            $ grep -i slab /proc/meminfo
            Slab:             708820 kB
            Could be unrolled to:
            Code:
            # wc -l /proc/slabinfo
            150 /proc/slabinfo
Have you ever tried the slabtop command on Linux?
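If not, a quick way to peek at the largest kernel caches (a sketch; in procps-ng slabtop, -o prints a single snapshot and -s c sorts by cache size):
Code:
# usually needs root to read /proc/slabinfo
# slabtop -o -s c | head -n 15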

Do you see now how inaccurate it can sometimes be to rely only on free/available memory metrics?
            Last edited by kloczek; 14-06-2018, 00:26.
