Zabbix monitoring Solaris Disks

  • jusavard
    Member
    • Sep 2013
    • 48

    #1

    Zabbix monitoring Solaris Disks

    Hi,
    I'm having a weird issue with Solaris and Zabbix using the vfs.dev.read / vfs.dev.write keys.

    It seems only "sd" disks are working, not "ssd" ones ("ssd" here is a Solaris disk driver name, not Solid State Drive).

    Code:
    [root@ZabbixServer ~]# zabbix_get -s SolarisServer -k vfs.dev.write[sd0]
    274970416640
    
    [root@ZabbixServer ~]# zabbix_get -s SolarisServer -k vfs.dev.write[ssd350]
    ZBX_NOTSUPPORTED
    [root@ZabbixServer ~]# zabbix_get -s SolarisServer -k vfs.dev.write[ssd350,operations]
    ZBX_NOTSUPPORTED
    [root@ZabbixServer ~]# zabbix_get -s SolarisServer -k vfs.dev.write[ssd350,bytes]

    Here is my agent version:

    Code:
    root@SolarisServer:/root> /usr/local/sbin/zabbix_agentd -V
    Zabbix Agent (daemon) v2.2.1 (revision 40808) (09 December 2013)
    Compilation time: Dec 24 2013 20:08:37
    Is anyone having the same issue?
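
    As far as I know the Solaris agent resolves these device names through kstat, so a quick sanity check on the host would be whether kstat exposes the "ssd350" name at all (just a sketch; the instance numbers depend on the disk):

    Code:
    # is there an I/O kstat for the ssd driver, instance 350?
    kstat -p -m ssd -i 350 -s nwritten
    # compare with the sd instance that does work:
    kstat -p -m sd -i 0 -s nwritten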
  • aib
    Senior Member
    • Jan 2014
    • 1615

    #2
    Would you mind showing the result of the discovery command?
    Code:
     zabbix_agentd -t vfs.fs.discovery
    Sincerely yours,
    Aleksey


    • jusavard
      Member
      • Sep 2013
      • 48

      #3
      Hi,

      Here is the output:

      Code:
      root@uhd001:/root> /usr/local/sbin/zabbix_agentd -t vfs.fs.discovery
      vfs.fs.discovery                              [s|{
              "data":[
                      {
                              "{#FSNAME}":"\/",
                              "{#FSTYPE}":"zfs"},
                      {
                              "{#FSNAME}":"\/devices",
                              "{#FSTYPE}":"devfs"},
                      {
                              "{#FSNAME}":"\/system\/contract",
                              "{#FSTYPE}":"ctfs"},
                      {
                              "{#FSNAME}":"\/proc",
                              "{#FSTYPE}":"proc"},
                      {
                              "{#FSNAME}":"\/etc\/mnttab",
                              "{#FSTYPE}":"mntfs"},
                      {
                              "{#FSNAME}":"\/etc\/svc\/volatile",
                              "{#FSTYPE}":"tmpfs"},
                      {
                              "{#FSNAME}":"\/system\/object",
                              "{#FSTYPE}":"objfs"},
                      {
                              "{#FSNAME}":"\/etc\/dfs\/sharetab",
                              "{#FSTYPE}":"sharefs"},
                      {
                              "{#FSNAME}":"\/dev\/fd",
                              "{#FSTYPE}":"fd"},
                      {
                              "{#FSNAME}":"\/var",
                              "{#FSTYPE}":"zfs"},
                      {
                              "{#FSNAME}":"\/tmp",
                              "{#FSTYPE}":"tmpfs"},
                      {
                              "{#FSNAME}":"\/var\/run",
                              "{#FSTYPE}":"tmpfs"},
                      {
                              "{#FSNAME}":"\/export",
                              "{#FSTYPE}":"zfs"},
                      {
                              "{#FSNAME}":"\/export\/home",
                              "{#FSTYPE}":"zfs"},
      
      ....
      I removed the end because it is just mount points for the Solaris Zones, including their hostnames...


      • aib
        Senior Member
        • Jan 2014
        • 1615

        #4
        I don't see any "ssd" device in your list.
        It means that this logical drive cannot be controlled/monitored/etc. with the built-in items.

        To monitor the physical devices you need some different (or additional, hand-made) scripts; see the sketch below.
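
        A minimal sketch of such a hand-made item, assuming the disk statistics are exposed through kstat under the driver/instance name (the key name and the ssd:350 spec are only examples):

        Code:
        # added to zabbix_agentd.conf; prints bytes written for the ssd350 disk
        UserParameter=solaris.disk.nwritten.ssd350,kstat -p ssd:350:ssd350:nwritten | awk '{print $2}'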
        Sincerely yours,
        Aleksey


        • kloczek
          Senior Member
          • Jun 2006
          • 1771

          #5
          Originally posted by jusavard
          Hi,

          Here is the output:

          Code:
          root@uhd001:/root> /usr/local/sbin/zabbix_agentd -t vfs.fs.discovery
          The vfs.fs.discovery[] key gives the list of mounted filesystems.
          To generate a list of available block devices on Solaris, you can use a system.run[] command as the LLD discovery rule, like:
          Code:
          $ echo | pfexec format | awk 'BEGIN {print "{"; print "\"data\":["; ORS=""; NUM=1} /. .[0-9]\./ {if (NUM!=1) {print ",\n"}; print " { \"{#DISK}\":\"" $2 "\" }"; NUM++} END {print "\n ]\n}\n"}'
          {
          "data":[
           { "{#DISK}":"c7t0d0" },
           { "{#DISK}":"c7t2d0" },
           { "{#DISK}":"c7t3d0" },
           { "{#DISK}":"c7t4d0" }
           ]
          }
          BTW, on Linux, to generate the exact list of block devices I'm using a similar one-liner in my template:
          Code:
          system.run["awk 'BEGIN {print \"{\"; print \"\\"data\\":[\"; ORS=\"\"} {if (NR!=1) {print \",\n\"}; print \" { \\"{#DISK}\\":\\"\" $3 \"\\" }\"} END {print \"\n ]\n}\n\"}' /proc/diskstats"]
          On Linux the above generates output like:
          Code:
          $ awk 'BEGIN {print "{"; print "\"data\":["; ORS=""} {if (NR!=1) {print ",\n"}; print " { \"{#DISK}\":\"" $3 "\" }"} END {print "\n ]\n}\n"}' /proc/diskstats
          {
          "data":[
           { "{#DISK}":"ram0" },
           { "{#DISK}":"ram1" },
           { "{#DISK}":"ram2" },
           { "{#DISK}":"ram3" },
           { "{#DISK}":"ram4" },
           { "{#DISK}":"ram5" },
           { "{#DISK}":"ram6" },
           { "{#DISK}":"ram7" },
           { "{#DISK}":"ram8" },
           { "{#DISK}":"ram9" },
           { "{#DISK}":"ram10" },
           { "{#DISK}":"ram11" },
           { "{#DISK}":"ram12" },
           { "{#DISK}":"ram13" },
           { "{#DISK}":"ram14" },
           { "{#DISK}":"ram15" },
           { "{#DISK}":"loop0" },
           { "{#DISK}":"loop1" },
           { "{#DISK}":"loop2" },
           { "{#DISK}":"loop3" },
           { "{#DISK}":"loop4" },
           { "{#DISK}":"loop5" },
           { "{#DISK}":"loop6" },
           { "{#DISK}":"loop7" },
           { "{#DISK}":"zram0" },
           { "{#DISK}":"cciss/c0d0" },
           { "{#DISK}":"cciss/c0d0p1" },
           { "{#DISK}":"cciss/c0d0p2" },
           { "{#DISK}":"dm-0" },
           { "{#DISK}":"dm-1" }
           ]
          }
          After this, using an LLD regexp it is possible to filter out, for example, the ramdisks and loop devices.
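
          Item prototypes attached to such a discovery rule then just reference the {#DISK} macro, for example (the parameters here are only illustrative):
          Code:
          vfs.dev.read[{#DISK},operations]
          vfs.dev.write[{#DISK},bytes]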
          Last edited by kloczek; 12-03-2015, 03:58.
          http://uk.linkedin.com/pub/tomasz-k%...zko/6/940/430/
          https://kloczek.wordpress.com/
          zapish - Zabbix API SHell binding https://github.com/kloczek/zapish
          My zabbix templates https://github.com/kloczek/zabbix-templates


          • jusavard
            Member
            • Sep 2013
            • 48

            #6
            Hi, by the way I'm not looking at monitoring filesystems but "raw disks". Those disks are used by Oracle as ASM disks, so there is no real filesystem on them (neither zfs nor ext4/xfs...).

            Nonetheless, I don't get why, but zabbix_get returns "ZBX_NOTSUPPORTED" when using the key vfs.dev.write[ssd350]; however, I added the items and created a graph yesterday and it is still being populated...


            Last edited by jusavard; 12-03-2015, 15:46.


            • kloczek
              Senior Member
              • Jun 2006
              • 1771

              #7
              Originally posted by jusavard
              Hi, by the way I'm not looking at monitoring filesystems but "raw disks". Those disks are used by Oracle as ASM disks, so there is no real filesystem on them (neither zfs nor ext4/xfs...).

              Nonetheless, I don't get why, but zabbix_get returns "ZBX_NOTSUPPORTED" when using the key vfs.dev.write[ssd350]; however, I added the items and created a graph yesterday and it is still being populated...


              The vfs.dev.write[] key is used to monitor OS VFS-layer activity, and the parameter passed to it must be a device/mount point visible at that layer. You cannot use it to monitor Oracle DB ASM. To monitor Oracle ASM you must use SQL queries. You can find ASM metrics in views like V$ASM_FILESYSTEM, V$ASM_VOLUME, V$ASM_OFSVOLUMES (I'm not sure that's all of them).
              To access that data you can use Zabbix ODBC items; try reading about Zabbix ODBC monitoring.
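
              A rough example of what such an ODBC item could look like (the key description, DSN name and query are only placeholders; in 2.x the SQL query itself is configured on the item):
              Code:
              db.odbc.select[asm.free_mb,oracle_dsn]
              # SQL query configured on the item, e.g.:
              #   SELECT SUM(free_mb) FROM v$asm_diskgroup;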
              Last edited by kloczek; 13-03-2015, 00:11.
              http://uk.linkedin.com/pub/tomasz-k%...zko/6/940/430/
              https://kloczek.wordpress.com/
              zapish - Zabbix API SHell binding https://github.com/kloczek/zapish
              My zabbix templates https://github.com/kloczek/zabbix-templates


              • aib
                Senior Member
                • Jan 2014
                • 1615

                #8
                Also, some add-ons for Zabbix can help to monitor databases, including Oracle.

                For example, DBforBIX.
                The latest version of that product is published on SourceForge.
                Sincerely yours,
                Aleksey


                • jusavard
                  Member
                  • Sep 2013
                  • 48

                  #9
                  Hi,
                  Sorry for the late reply. I don't know how or why, but I created graphs with vfs.dev.write/read on the ASM disks and, even though I don't get data when executing zabbix_get, I have complete graphs right now...

                  I double- and triple-checked for typos, but there are none. It's now working on 8 servers and all of them are generating perfectly correct graphs.
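
                  One way to narrow down where the ZBX_NOTSUPPORTED comes from would be to run the same key both locally in agent test mode and remotely through zabbix_get, for example:
                  Code:
                  /usr/local/sbin/zabbix_agentd -t 'vfs.dev.write[ssd350,bytes]'
                  zabbix_get -s SolarisServer -k 'vfs.dev.write[ssd350,bytes]'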

