Logfile contains a large record: ...

  • tracert
    Junior Member
    • Jul 2014
    • 2

    #1


    I've searched hard for this, but I couldn't find a suggestion that points me in the right direction yet.

    I have some log files I need to monitor, checking whether certain string values like 'ERROR' or 'FATAL' appear in them. That isn't a problem at all as long as those strings are found within the first 64 kB ... which brings me to my problem: I also need triggers to fire if a certain string appears after the first 64 kB, which gives me the following message in the agentd log:

    Code:
    Only the first 64 kB will be analyzed, the rest will be ignored while Zabbix agent is running
    At the moment I have to deal with record logs that sometimes hold more than ~512 kB per row ... I know what you're thinking -> optimize what gets logged in the first place. I assure you that will happen, but right now I need to work with records of this size.

    I've been thinking about the UserParameter option in the agentd config, as well as setting up a cron job that executes a script, let's say every 10 min, which parses the large record log and writes the info I need into an additional log that is then monitored by Zabbix. Roughly what I have in mind is sketched below.
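
    A rough sketch of that helper, in C for concreteness (all paths, the pattern list, the KEEP limit and the cron line are placeholders): it remembers its file offset between runs, copies every record containing ERROR or FATAL into a small side log, and caps each copied record far below 64 kB, so an ordinary log[] item on the side log never runs into the limit.

    Code:
    /* Sketch only: run from cron every 10 min; paths and names are placeholders. */
    // cron entry: */10 * * * * /usr/local/bin/record-grep
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define BIG_LOG  "/var/log/app/records.log"  /* the huge record log       */
    #define SIDE_LOG "/var/log/app/errors.log"   /* the log Zabbix watches    */
    #define STATE    "/var/tmp/records.offset"   /* offset saved between runs */
    #define KEEP     1024                        /* max bytes kept per match  */

    int main(void)
    {
        long offset = 0;
        FILE *state = fopen(STATE, "r");

        if (state != NULL)
        {
            if (fscanf(state, "%ld", &offset) != 1)
                offset = 0;
            fclose(state);
        }

        FILE *in = fopen(BIG_LOG, "r");
        if (in == NULL)
            return EXIT_FAILURE;

        /* if the log was rotated and is smaller now, start from the top */
        fseek(in, 0, SEEK_END);
        if (ftell(in) < offset)
            offset = 0;
        fseek(in, offset, SEEK_SET);

        FILE *out = fopen(SIDE_LOG, "a");
        if (out == NULL)
        {
            fclose(in);
            return EXIT_FAILURE;
        }

        static char line[1024 * 1024];  /* room for the ~512 kB records */

        while (fgets(line, sizeof(line), in) != NULL)
        {
            if (strstr(line, "ERROR") != NULL || strstr(line, "FATAL") != NULL)
            {
                size_t len = strcspn(line, "\n");  /* drop the newline */

                if (len > KEEP)
                    len = KEEP;                    /* cap oversized records */
                line[len] = '\0';
                fprintf(out, "%s\n", line);
            }
        }

        /* remember where we stopped for the next run */
        state = fopen(STATE, "w");
        if (state != NULL)
        {
            fprintf(state, "%ld", ftell(in));
            fclose(state);
        }

        fclose(out);
        fclose(in);
        return EXIT_SUCCESS;
    }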

    Any help is much appreciated!
  • jan.garaj
    Senior Member
    Zabbix Certified Specialist
    • Jan 2010
    • 506

    #2
    You can try to hack it in the source code.

    Line 1753 defines the size of the buffer BUF_SIZE (line 1807 is the condition behind your error). I can't guarantee functionality - you'd have to dig deeper into the source code yourself. :-)
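
    Schematically, the behaviour in question looks something like this (an illustration only, not the actual Zabbix source - the real identifiers and line numbers depend on the version you check out):

    Code:
    #include <stdio.h>
    #include <string.h>

    /* stock 64 kB limit; a rebuilt agent with e.g. (1024 * 1024) here
     * would accept the ~512 kB records described above */
    #define BUF_SIZE (64 * 1024)

    int main(void)
    {
        static char buf[BUF_SIZE];
        FILE *f = fopen("/var/log/app/records.log", "r");  /* placeholder */

        if (f == NULL)
            return 1;

        while (fgets(buf, sizeof(buf), f) != NULL)
        {
            size_t n = strlen(buf);

            if (n == sizeof(buf) - 1 && buf[n - 1] != '\n')
            {
                int c;

                /* the record did not fit into the buffer: this is the
                 * situation behind the "only the first 64 kB will be
                 * analyzed" warning; the rest of the record is skipped */
                fprintf(stderr, "record longer than BUF_SIZE, rest ignored\n");
                while ((c = fgetc(f)) != EOF && c != '\n')
                    ;
            }
            /* ... 'ERROR' / 'FATAL' matching on buf happens here ... */
        }

        fclose(f);
        return 0;
    }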


    • tracert
      Junior Member
      • Jul 2014
      • 2

      #3
      Thanks for your reply.

      I figured that running a cron job which parses the original log every 10 min and writes the necessary data to a separate log file is more convenient, as there are a couple dozen agents running JBoss AS & elasticsearch whose logfiles I need to pull information from - and those are huge ... Those agents connect to a proxy; once I enable the proxy in DM with items/triggers checking the logfiles, my Zabbix queue fills up and all systems start reporting as unreachable.

      I'll implement my cron/script solution today, and I'm confident the load will decrease massively.
