Web Scenario Data Usage Problem

  • zab_monkey
    Member
    • Mar 2010
    • 37

    #1

    Web Scenario Data Usage Problem

    Hi guys,

    I have a series of web scenarios that check a number of external sites. I have noticed that whenever something goes wrong with these sites, Zabbix of course behaves perfectly in alerting me. However, I have watched my netflows when these errors occur, and there is a huge jump in traffic between the Zabbix server and these sites, both upstream and downstream. This continues even after the sites come good again, and doesn't abate until I 'recycle' the web scenario, at which point it returns to normal.

    Now these aren't big sites; in fact, running a curl or wget from the Zabbix server against one of these sites returns no more than 6 kB of data.
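
    If anyone wants to check the same baseline, something like this works; just a sketch, assuming curl is available on the Zabbix server, with www.example.com standing in for the real site:

    # single GET, discard the body, report bytes transferred and HTTP status
    curl -sk -o /dev/null -w 'downloaded: %{size_download} bytes, status: %{http_code}\n' https://www.example.com/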

    I suspect there may be something going on with these sites, but it's odd that Zabbix continues to stream this inordinate amount of data even after the site is OK.

    By inordinate, I mean that for a 300 s web check with a single step hitting a web site whose GET response is about 6 kB, it's around 100 MB an hour, which is just huge!
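
    To put rough numbers on that: one 6 kB GET every 300 s is 12 requests an hour, i.e. about 72 kB/hour, so 100 MB/hour is over a thousand times what the scenario should be transferring.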

    I have checked the online documentation, and it doesn't define precisely what code or specific call is happening, though I believe it to be curl. I am just wondering if anyone else has seen this kind of behaviour, or if someone can help me get a better understanding of how Zabbix is hitting the page, so I can determine if my sites are just going stupid crazy.

    Thanks all,

    JC
  • zab_monkey
    Member
    • Mar 2010
    • 37

    #2
    Hello all,

    OK, as suspected, Zabbix was just doing what it's supposed to do. In short, there is no issue with Zabbix, as I didn't think there really would be. Having this confirmed would be great, but I believe Zabbix performs something like:

    curl -kL www.example.com

    or something like it?

    Essentially, the issue was that when the site went down, it redirected to an error page that kept redirecting to itself. Where a browser (like Firefox) detects that the page will never complete and shows an error saying so, I expect Zabbix does not, and just keeps following wherever it is redirected, due to a switch like -L, and therefore just keeps redirecting and redirecting and redirecting, on and on and on...
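
    You can see a loop like this with curl's own redirect controls; a sketch, assuming the broken site is at www.example.com (note that plain curl -L gives up after 50 redirects by default, whereas whatever Zabbix runs evidently kept retrying on every check):

    # follow at most 5 redirects, then report how many were taken and where we ended up
    curl -skL --max-redirs 5 -o /dev/null -w 'redirects: %{num_redirects}, final URL: %{url_effective}\n' https://www.example.com/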

    As a result, we saw from the web logs that it was making a dozen-odd calls per second, because it kept following the redirect that would never end... hence the huge spike in data.

    However, it is worth noting that when the site came good again, Zabbix kept using the same session, but I expect that's because the first one never changed, since it kept trying as frequently as it did.

    Just assuming, though.
    Anyway, thanks to all who read that; I just thought this was interesting and would provide closure... now I am off to hurt the web developer who did this!!

    JC
