Problem with LLD on FS with V2.0.3

  • safpsr
    Member
    • Aug 2007
    • 70

    #1

    Problem with LLD on FS with V2.0.3

    I have a problem with LLD for filesystems.

    My server runs Zabbix 2.0.3 and my agents are on v2.0.3 as well, under AIX. I set up a template with item, trigger and graph prototypes and added it to my 70 AIX clients. It works fine for all of them except one.
    That host runs AIX 5.3, but I have more than 30 other clients also on AIX 5.3. The two differences are that it has a lot of filesystems (more than 130) and that its default language is French.

    Have you encountered a limit on the number of filesystems monitored?
    I have not found a way to get LLD-specific logs to see what happened. Is that possible?

    Thank you in advance
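    For anyone unfamiliar with the format involved: the agent's vfs.fs.discovery key returns a JSON object with a "data" array of macro/value rows, which the server expands into item prototypes. A minimal Python sketch of building a payload in that shape (the filesystem list here is invented for illustration, not taken from the affected host):

    ```python
    import json

    # Build a discovery payload in the same shape vfs.fs.discovery returns.
    # The filesystem names below are made up for this sketch.
    filesystems = [("/", "jfs"), ("/usr", "jfs"), ("/var", "jfs"), ("/opt", "jfs2")]

    payload = {"data": [{"{#FSNAME}": name, "{#FSTYPE}": fstype}
                        for name, fstype in filesystems]}

    print(json.dumps(payload, indent=1))
    ```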
  • safpsr
    Member
    • Aug 2007
    • 70

    #2
    I tried to log activity on the client side. The client sends the correct list of filesystems:
    4399222:20121212:113347.608 Requested [vfs.fs.discovery]
    4399222:20121212:113347.609 Sending back [{
    "data":[
    {
    "{#FSNAME}":"\/",
    "{#FSTYPE}":"jfs"},
    {
    "{#FSNAME}":"\/usr",
    "{#FSTYPE}":"jfs"},
    {
    "{#FSNAME}":"\/var",
    "{#FSTYPE}":"jfs"},
    {
    "{#FSNAME}":"\/tmp",
    "{#FSTYPE}":"jfs"},
    {
    "{#FSNAME}":"\/home",
    "{#FSTYPE}":"jfs"},
    {
    "{#FSNAME}":"\/proc",
    "{#FSTYPE}":"procfs"},
    {
    "{#FSNAME}":"\/opt",
    "{#FSTYPE}":"jfs"},
    {
    "{#FSNAME}":"\/ofa",
    "{#FSTYPE}":"jfs"},
    ...

    ...

    "{#FSNAME}":"\/TSAAD1",
    "{#FSTYPE}":"jfs2"},
    {
    "{#FSNAME}":"\/oratmp",
    "{#FSTYPE}":"jfs2"},
    {
    "{#FSNAME}":"\/D\/G\/I",
    "{#FSTYPE}":"jfs2"},
    {
    "{#FSNAME}":"\/home\/download",
    "{#FSTYPE}":"jfs2"},
    {
    "{#FSNAME}":"\/D\/G\/C",
    "{#FSTYPE}":"jfs2"},
    {
    "{#FSNAME}":"\/AutresDoc",
    "{#FSTYPE}":"jfs2"},
    {
    "{#FSNAME}":"\/D\/T\/STA\/PRS\/FIC",
    "{#FSTYPE}":"jfs2"},
    {
    "{#FSNAME}":"\/D\/G\/D",
    "{#FSTYPE}":"jfs2"}]}]

    It looks correct. I can't run my server with DebugLevel=4 because I monitor more than 700 servers.

    Any ideas?
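    One quick sanity check on a payload captured from the agent log is to parse it with Python's json module, which will reject any malformed JSON and lets you count the rows. A short sketch, using a truncated copy of the logged payload (with the escaped slashes exactly as the agent emits them):

    ```python
    import json

    # A truncated copy of the payload the agent logged ("\/" as in the log).
    raw = ('{"data":[{"{#FSNAME}":"\\/","{#FSTYPE}":"jfs"},'
           '{"{#FSNAME}":"\\/usr","{#FSTYPE}":"jfs"},'
           '{"{#FSNAME}":"\\/var","{#FSTYPE}":"jfs"}]}')

    payload = json.loads(raw)   # raises ValueError if the JSON is malformed
    rows = payload["data"]
    print(len(rows), "filesystems parsed")
    for row in rows:
        print(row["{#FSNAME}"], row["{#FSTYPE}"])
    ```

    If the full captured payload parses cleanly and the row count matches the number of mounted filesystems, the agent side can reasonably be ruled out.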


    • safpsr
      Member
      • Aug 2007
      • 70

      #3
      Here is the log on the server side:
      3610:20121212:140010.048 Sending [vfs.fs.discovery
      ]
      3610:20121212:140010.051 get value from agent result: '{
      "data":[
      {
      "{#FSNAME}":"\/",
      "{#FSTYPE}":"jfs"},
      {
      "{#FSNAME}":"\/usr",
      "{#FSTYPE}":"jfs"},
      {
      "{#FSNAME}":"\/var",
      "{#FSTYPE}":"jfs"},
      ...
      ...
      "{#FSNAME}":"\/D\/G\/C",
      "{#FSTYPE}":"jfs2"},
      {
      "{#FSNAME}":"\/AutresDoc",
      "{#FSTYPE}":"jfs2"},
      {
      "{#FSNAME}":"\/D\/T\/STA\/PRS\/FIC",
      "{#FSTYPE}":"jfs2"},
      {
      "{#FSNAME}":"\/D\/G\/D",
      "{#FSTYPE}":"jfs2"}]}'
      3610:20121212:140010.051 End of get_value():SUCCEED
      3610:20121212:140010.051 In activate_host() hostid:10491 itemid:47050 type:0
      3610:20121212:140010.051 In DBlld_process_discovery_rule() itemid:47050
      3610:20121212:140010.051 query [txnlev:0] [select hostid,key_,status,filter,error,lifetime from items where itemid=47050]
      3610:20121212:140010.051 In substitute_simple_macros() data:'30'
      3610:20121212:140010.051 query [txnlev:1] [begin;]
      3610:20121212:140010.052 DBlld_process_discovery_rule() f_macro:'{#FSTYPE}' f_regexp:'^jfs'
      3610:20121212:140010.052 In DBlld_update_items()
      3610:20121212:140010.052 query [txnlev:1] [select i.itemid,i.name,i.key_,i.type,i.value_type,i.data_type,i.delay,i.delay_flex,i.history,i.trends,i.status,i.trapper_hosts,i.units,i.multiplier,i.delta,i.formula,i.logtimefmt,i.valuemapid,i.params,i.ipmi_sensor,i.snmp_community,i.snmp_oid,i.port,i.snmpv3_securityname,i.snmpv3_securitylevel,i.snmpv3_authpassphrase,i.snmpv3_privpassphrase,i.authtype,i.username,i.password,i.publickey,i.privatekey,i.description,i.interfaceid from items i,item_discovery id where i.itemid=id.itemid and id.parent_itemid=47050]
      3617:20121212:140010.052 Sending [service_state["BlackBerry MDS Connection Service"]
      ]
      3610:20121212:140010.052 In DBlld_check_record() jp_row:'{"{#FSNAME}":"\/","{#FSTYPE}":"jfs"}'
      3610:20121212:140010.053 End of DBlld_check_record():SUCCEED
      3610:20121212:140010.053 In DBlld_make_item()
      3610:20121212:140010.053 In substitute_key_macros() data:'FS_base[{#FSNAME}]'
      3610:20121212:140010.053 In substitute_discovery_macros() data:'{#FSNAME}'
      3610:20121212:140010.053 End of substitute_discovery_macros() data:'/'
      3610:20121212:140010.053 End of substitute_key_macros():SUCCEED data:'FS_base[/]'
      3610:20121212:140010.053 query [txnlev:1] [select distinct i.itemid from items i,item_discovery id where i.itemid=id.itemid and id.parent_itemid=47197 and i.key_='FS_base[/]']
      3610:20121212:140010.053 query [txnlev:1] [select distinct i.itemid,id.key_,i.key_ from items i,item_discovery id where i.itemid=id.itemid and id.parent_itemid=47197]
      3617:20121212:140010.053 get value from agent result: '0'
      3617:20121212:140010.053 End of get_value():SUCCEED
      3617:20121212:140010.053 In activate_host() hostid:10222 itemid:41620 type:0
      3617:20121212:140010.053 End of get_values():1
      3617:20121212:140010.053 poller #8 spent 0.006395 seconds while updating 1 values
      3617:20121212:140010.053 In DCconfig_get_poller_nextcheck() poller_type:0
      3617:20121212:140010.053 End of DCconfig_get_poller_nextcheck():1355317210
      3617:20121212:140010.053 In get_values()
      3617:20121212:140010.053 In DCconfig_get_poller_items() poller_type:0
      3617:20121212:140010.053 End of DCconfig_get_poller_items():1
      3617:20121212:140010.053 In substitute_key_macros() data:'num_lcpu'
      3617:20121212:140010.054 End of substitute_key_macros():SUCCEED data:'num_lcpu'
      3610:20121212:140010.054 query [txnlev:1] [select itemid from items where hostid=10491 and key_='FS_base[/]']
      3610:20121212:140010.054 In substitute_discovery_macros() data:'Base oracle on $1'
      3610:20121212:140010.054 End of substitute_discovery_macros() data:'Base oracle on $1'
      3610:20121212:140010.054 In substitute_key_macros() data:''
      3610:20121212:140010.054 End of substitute_key_macros():FAIL data:''
      3610:20121212:140010.054 query [txnlev:1] [select applicationid from items_applications where itemid=47197]

      3610:20121212:140010.064 End of DBlld_make_item():SUCCEED
      3610:20121212:140010.064 In DBlld_check_record() jp_row:'{"{#FSNAME}":"\/usr","{#FSTYPE}":"jfs"}'
      3610:20121212:140010.064 End of DBlld_check_record():SUCCEED
      3610:20121212:140010.064 In DBlld_make_item()
      3610:20121212:140010.064 In substitute_key_macros() data:'FS_base[{#FSNAME}]'
      3610:20121212:140010.064 In substitute_discovery_macros() data:'{#FSNAME}'
      3610:20121212:140010.064 End of substitute_discovery_macros() data:'/usr'
      3610:20121212:140010.064 End of substitute_key_macros():SUCCEED data:'FS_base[/usr]'
      3610:20121212:140010.064 query [txnlev:1] [select distinct i.itemid from items i,item_discovery id where i.itemid=id.itemid and id.parent_itemid=47197 and i.key_='FS_base[/usr]']
      3610:20121212:140010.065 query [txnlev:1] [select distinct i.itemid,id.key_,i.key_ from items i,item_discovery id where i.itemid=id.itemid and id.parent_itemid=47197]
      3610:20121212:140010.065 query [txnlev:1] [select itemid from items where hostid=10491 and key_='FS_base[/usr]']
      3610:20121212:140010.065 In substitute_discovery_macros() data:'Base oracle on $1'
      3610:20121212:140010.065 End of substitute_discovery_macros() data:'Base oracle on $1'
      3610:20121212:140010.065 In substitute_key_macros() data:''
      3610:20121212:140010.065 End of substitute_key_macros():FAIL data:''
      3610:20121212:140010.065 query [txnlev:1] [select applicationid from items_applications where itemid=47197]
      3610:20121212:140010.065 End of DBlld_make_item():SUCCEED
      3610:20121212:140010.065 In DBlld_check_record() jp_row:'{"{#FSNAME}":"\/var","{#FSTYPE}":"jfs"}'
      3610:20121212:140010.065 End of DBlld_check_record():SUCCEED
      3610:20121212:140010.065 In DBlld_make_item()
      3610:20121212:140010.065 In substitute_key_macros() data:'FS_base[{#FSNAME}]'
      3610:20121212:140010.065 In substitute_discovery_macros() data:'{#FSNAME}'
      3610:20121212:140010.065 End of substitute_discovery_macros() data:'/var'
      3610:20121212:140010.065 End of substitute_key_macros():SUCCEED data:'FS_base[/var]'
      3610:20121212:140010.065 query [txnlev:1] [select distinct i.itemid from items i,item_discovery id where i.itemid=id.itemid and id.parent_itemid=47197 and i.key_='FS_base[/var]']
      3610:20121212:140010.066 query [txnlev:1] [select distinct i.itemid,id.key_,i.key_ from items i,item_discovery id where i.itemid=id.itemid and id.parent_itemid=47197]
      3610:20121212:140010.066 query [txnlev:1] [select itemid from items where hostid=10491 and key_='FS_base[/var]']
      3610:20121212:140010.066 In substitute_discovery_macros() data:'Base oracle on $1'
      3610:20121212:140010.066 End of substitute_discovery_macros() data:'Base oracle on $1'
      3610:20121212:140010.066 In substitute_key_macros() data:''
      3610:20121212:140010.066 End of substitute_key_macros():FAIL data:''

      ...


      Very strange behaviour. Everything seems to be right, but is "substitute_key_macros():FAIL" normal?
      And nothing appeared in the host's items...


      • safpsr
        Member
        • Aug 2007
        • 70

        #4
        I tried another thing: I added a filter so that fewer filesystems are followed on this server, and then it works!

        I think it is a limit problem in filesystem discovery. Today I have 132 filesystems on this machine. What is the limit for LLD? 100 filesystems? 128? Fewer?

        How do I open a new bug report?

        Thanks
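        If a payload-size limit is the suspect, a back-of-the-envelope estimate of the discovery JSON for 132 filesystems is easy to compute. This sketch uses synthetic mount-point names of plausible length; it says nothing about what Zabbix's actual limit is, it only shows roughly how many bytes the server has to accept:

        ```python
        import json

        # Rough size estimate of a vfs.fs.discovery payload for N filesystems.
        # Mount-point names are synthetic; real ones on the affected host may differ.
        n_fs = 132
        rows = [{"{#FSNAME}": f"/filesystem/number/{i:03d}", "{#FSTYPE}": "jfs2"}
                for i in range(n_fs)]
        payload = json.dumps({"data": rows})

        print(f"{n_fs} filesystems -> {len(payload)} bytes")
        ```

        Comparing that figure against the size at which discovery starts failing (by tightening or loosening the filter) would help pin down whether the threshold is a byte count or a row count.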


        • safpsr
          Member
          • Aug 2007
          • 70

          #5
          Bug report opened


