How to set an exception on trigger prototypes

  • wim.dochy@premier.fed.be
    Junior Member
    • May 2014
    • 4

    #1

    How to set an exception on trigger prototypes

    Hi,

    I have a low-level discovery rule for discovering filesystems (FSTYPE).
    I created the item prototypes, then the trigger prototypes.
    Now I want to create an exception in the trigger prototype based on {#FSNAME}.
    For example: I have a discovered NFS volume '/backup/'. I still want it monitored, but I don't want a trigger on any volume where {#FSNAME} contains 'backup'.

    Any idea?

    thanks
    Wim

    Zabbix 2.2.3 running on CentOS
  • Navern
    Member
    • May 2013
    • 33

    #2
    The first solution that comes to mind:
    Create two separate discovery rules: in the first one, use a regexp filter so that filesystems with '/backup/' are not discovered; in the second one, discover only the backup filesystems. However, I don't know what to do about the unique key (both rules would end up creating items with the same key).
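
    A rough sketch of the intended split, purely as an illustration (the mount points and the 'backup' pattern below are made up; in Zabbix the regexps would go into the filter of each discovery rule, applied to {#FSNAME}):

    import re

    # Hypothetical list of discovered mount points.
    discovered = ["/", "/home", "/var", "/backup/", "/data/backup2"]

    backup = re.compile(r"backup")

    # Rule 1: keep everything that does NOT match 'backup'.
    rule1 = [fs for fs in discovered if not backup.search(fs)]
    # Rule 2: keep ONLY what matches 'backup'.
    rule2 = [fs for fs in discovered if backup.search(fs)]

    print(rule1)  # ['/', '/home', '/var']
    print(rule2)  # ['/backup/', '/data/backup2']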

    As a second solution, just disable this specific trigger (the one whose name contains '/backup/') on every host, using the API or by hand.


    • wim.dochy@premier.fed.be
      Junior Member
      • May 2014
      • 4

      #3
      Thanks for the quick response.

      For the first solution: indeed, the unique key would be a problem. Also, can I set up a discovery on FSNAME instead of FSTYPE? I could use a different but similar key (percentage free ~ size free).

      For the second: there is no separate trigger created on each host for every volume (and there are too many hosts with this same volume, +-140; I'm lazy).

      Is there no possibility to add a filter in the trigger expression, something like:
      'vfs.fs.size[{#FSNAME},pfree].last()<25 WHERE {#FSNAME} != backup'


      • Navern
        Member
        • May 2013
        • 33

        #4
        Originally posted by wim.dochy@premier.fed.be

        For the second solution: I believe you should look into the Zabbix API and check whether you can mass-disable that specific trigger with the help of the API (there are wrappers for almost all popular languages).
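
        A minimal sketch of that idea against the JSON-RPC API, assuming Zabbix 2.2 (the URL, the credentials and the '/backup' search string are placeholders to adjust; for triggers, status 1 means disabled):

        import json
        import requests

        API = "http://zabbix.example.com/zabbix/api_jsonrpc.php"  # placeholder URL

        def call(method, params, auth=None):
            # Plain JSON-RPC call to the Zabbix API; raises if the API reports an error.
            payload = {"jsonrpc": "2.0", "method": method, "params": params, "id": 1, "auth": auth}
            reply = requests.post(API, data=json.dumps(payload),
                                  headers={"Content-Type": "application/json-rpc"}).json()
            if "error" in reply:
                raise RuntimeError(reply["error"])
            return reply["result"]

        token = call("user.login", {"user": "Admin", "password": "zabbix"})  # placeholder credentials

        # Find every trigger whose description mentions '/backup' ...
        triggers = call("trigger.get",
                        {"output": ["triggerid", "description"],
                         "search": {"description": "/backup"}},
                        auth=token)

        # ... and disable it (status 1 = disabled).
        for trig in triggers:
            call("trigger.update", {"triggerid": trig["triggerid"], "status": 1}, auth=token)
            print("disabled:", trig["description"])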

        For the first solution: as a workaround for the unique key, you can create your own script on the host side, something like UserParameter=custom.fs.backup_discovery,script_for_fs_discovery.sh, and use it in the second discovery rule to discover only the backup filesystems.
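
        A possible sketch of such a discovery script, here written in Python rather than shell (the script path, item key and the 'backup' match are only examples; the output is the low-level discovery JSON that Zabbix 2.x expects):

        #!/usr/bin/env python
        # Emit LLD JSON for the mounted filesystems whose mount point contains 'backup'.
        # Wired up in the agent config with something like:
        #   UserParameter=custom.fs.backup_discovery,/etc/zabbix/backup_discovery.py
        import json

        data = []
        with open("/proc/mounts") as mounts:
            for line in mounts:
                device, mountpoint, fstype = line.split()[:3]
                if "backup" in mountpoint:
                    data.append({"{#FSNAME}": mountpoint, "{#FSTYPE}": fstype})

        # Zabbix expects the discovered entries wrapped in a "data" list.
        print(json.dumps({"data": data}))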

        I am not aware of any means of making trigger prototypes conditional on an LLD macro value such as {#FSNAME} in discovery rules.
        Last edited by Navern; 30-05-2014, 16:00.


        • wim.dochy@premier.fed.be
          Junior Member
          • May 2014
          • 4

          #5
          Thanks,

          I'll try this on Monday.
          Have a nice weekend.


          • wim.dochy@premier.fed.be
            Junior Member
            • May 2014
            • 4

            #6
            Seems I'm not the only one with this issue.

            [link]

            I gave some thought to the suggested solutions, but as long as I cannot filter out certain volumes in the {#FSTYPE} discovery rule, I will always end up with triggers on the volumes I don't want triggered.

            Any idea how I can see the progress on the link above?

            thanks
            wim


            • Strategist
              Member
              • Sep 2013
              • 54

              #7
              I am now facing the same problem, but in my case the auto-discovery detects switch ports via SNMP, and I want triggers to be created only for the first and second ports.
              About the proposed methods:
              In the first case, we collect and record the same data twice, which is not great when the volume of collected data is large;
              in the second case, it is frankly a dirty hack.

              I realize that what we want runs against the Zabbix philosophy of collecting only the data you will actually use for further analysis (on that side, this case is easily solved with regular expressions), but sometimes you really do need it.
              So it would be nice if this were implemented in new versions of Zabbix.
