Add interface ipv4 address to net.if.discovery
  • Firm
    Senior Member
    • Dec 2009
    • 342

    #1

    Add interface ipv4 address to net.if.discovery

    Hi,

    This adds ipv4 address reporting for net.if.discovery on Linux. Tested on Ubuntu 12.04 and Zabbix 3.0.1. It reports only interfaces with ipv4 addresses.

    upd: Added ipv4/ipv6/noip interface reporting.
    Attached Files
    Last edited by Firm; 05-03-2016, 17:54.
  • kloczek
    Senior Member
    • Jun 2006
    • 1771

    #2
    Please open a JIRA ticket ASAP on support.zabbix.com with this patch, and add a short explanation.
    Such changes in Zabbix need to be reviewed first, and opening a ticket is the first step of that review.
    http://uk.linkedin.com/pub/tomasz-k%...zko/6/940/430/
    https://kloczek.wordpress.com/
    zapish - Zabbix API SHell binding https://github.com/kloczek/zapish
    My zabbix templates https://github.com/kloczek/zabbix-templates


    • Firm
      Senior Member
      • Dec 2009
      • 342

      #3
      Already done: https://support.zabbix.com/browse/ZBXNEXT-3170.


      • kloczek
        Senior Member
        • Jun 2006
        • 1771

        #4
        Originally posted by Firm
        (correct me if I'm wrong)

        On second thought, I think you are expecting exact behavior from net.if.discovery[] that this key was never meant to have.
        The purpose of this built-in key is to provide a JSON string used as an LLD iterator, and that list enumerates the network interfaces. A typical U*ix system provides per-interface in/out statistics.

        What kind of IP addresses those interfaces carry is not relevant.
        In some scenarios, interfaces in the up state may have no IPs at all and still relay in/out traffic.
        Example:
        Code:
        $ ip a
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet6 ::1/128 scope host 
               valid_lft forever preferred_lft forever
        2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
            link/ether 00:21:5a:a6:dd:da brd ff:ff:ff:ff:ff:ff
        3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
            link/ether 00:21:5a:a6:dd:dc brd ff:ff:ff:ff:ff:ff
        4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
            link/ether 00:21:5a:a6:dd:dc brd ff:ff:ff:ff:ff:ff
            inet 172.16.0.195/27 brd 172.16.0.223 scope global bond0
            inet 172.16.0.199/27 brd 172.16.0.223 scope global secondary bond0:2
            inet6 fe80::221:5aff:fea6:dddc/64 scope link 
               valid_lft forever preferred_lft forever
        $ cat /sys/class/net/bond0/bonding/slaves 
        eth0 eth1
        In exactly this case, your patch would cause eth0 and eth1 to be missing from the result list, and I need them there to monitor how equally (or not) network traffic is spread across the slave interfaces.

        A similar case occurs on other OSes. For example, on Solaris:
        Code:
        # ifconfig -a; echo; dladm show-phys; echo; dladm
        lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
                inet 127.0.0.1 netmask ff000000 
        aggr0: flags=100001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,PHYSRUNNING> mtu 1500 index 2
                inet 172.16.0.201 netmask ffffffe0 broadcast 172.16.0.223
                ether d8:d3:85:bf:31:18 
        lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
                inet6 ::1/128 
        aggr0: flags=120002000840<RUNNING,MULTICAST,IPv6,PHYSRUNNING> mtu 1500 index 2
                inet6 ::/0 
                ether d8:d3:85:bf:31:18 
        
        LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE
        net5              Ethernet             unknown    0      unknown   bnxe5
        net1              Ethernet             up         10000  full      bnxe1
        net0              Ethernet             up         10000  full      bnxe0
        net6              Ethernet             unknown    0      unknown   bnxe6
        net7              Ethernet             unknown    0      unknown   bnxe7
        net3              Ethernet             unknown    0      unknown   bnxe3
        net4              Ethernet             unknown    0      unknown   bnxe4
        net2              Ethernet             unknown    0      unknown   bnxe2
        
        LINK                CLASS     MTU    STATE    OVER
        net5                phys      1500   unknown  --
        net1                phys      1500   up       --
        net0                phys      1500   up       --
        net6                phys      1500   unknown  --
        net7                phys      1500   unknown  --
        net3                phys      1500   unknown  --
        net4                phys      1500   unknown  --
        net2                phys      1500   unknown  --
        aggr0               aggr      1500   up       net0 net1
        BTW, I just found that on Solaris 11 net.if.discovery does not show interfaces in the up state.
        For Solaris it would probably be necessary to teach the Zabbix agent a bit about the Solaris STREAMS DLPI v2 interface.

        # /usr/sbin/zabbix_agentd -t net.if.discovery
        net.if.discovery [s|{"data":[{"{#IFNAME}":"lo0"},{"{#IFNAME}":"aggr0"}]}]


        The net.if.discovery key should not provide a list of IPs.
        If you need such a list of IPs, with information about which IP sits on which interface, that should be a separate key.
        In other words, net.if.discovery[] provides information about OSI layer 2 (data link), and you are expecting this key to provide higher OSI layer details.
        Last edited by kloczek; 01-03-2016, 22:33.


        • Firm
          Senior Member
          • Dec 2009
          • 342

          #5
          The patch uses the getifaddrs(3) call, which returns a linked list of all interfaces found on the system. Unfortunately, each interface, e.g. eth0, may be listed more than once (as AF_INET, AF_INET6 or AF_PACKET), or may be listed only once (AF_PACKET). One needs an associative hash to keep the different address types bound to a unique interface name, or to build an array of structures where all address types are kept for each interface.


          • Firm
            Senior Member
            • Dec 2009
            • 342

            #6
            Added complete interface discovery with ipv4/ipv6/noaddr support.
            Code:
            $ zabbix_agentd -t net.if.discovery
            net.if.discovery                              [s|{"data":[{"{#IFNAME}":"lo","{#IFADDR}":"127.0.0.1","{#IFADDR6}":"::1"},{"{#IFNAME}":"eth0","{#IFADDR}":"X.X.X.X","{#IFADDR6}":"fe80::224:21ff:feXX:XXXX%eth0"},{"{#IFNAME}":"br0","{#IFADDR6}":"fe80::14b6:61ff:feXX:XXXX%br0"}]}]
            Should %eth0/%br0 etc. be removed from output?
            Last edited by Firm; 05-03-2016, 18:32.

