failed to process an incoming connection Zabbix Agent + Kubernetes

Collapse
X
 
  • Time
  • Show
Clear All
new posts
  • FlashMaverick87
    Junior Member
    • Jan 2024
    • 3

    #1

    failed to process an incoming connection Zabbix Agent + Kubernetes

    Hi everyone,
    I installed Kubernetes 1.29 and the related Zabbix Agent on an Alma Linux 9 machine, following the official guide. The installation completes correctly, but the Zabbix agent is unable to communicate with the Zabbix server.
    Below is the internal network configuration:

    Code:
    Zabbix Server v6.0: 10.4.114.5
    K8s Master Node: 10.4.114.21
    Worker01 K8s node: 10.4.114.22
    The nodes communicate with each other correctly and deployments run without problems.
    The only error that shows up in the agent logs is this:
    Code:
    09:19:11.153028 failed to process an incoming connection from 10.4.114.21: EOF

    Here is the master pod configuration:

    Code:
    Name:             zabbix-agent-rpf22
    Namespace:        monitoring
    Priority:         0
    Service Account:  zabbix-agent-service-account
    Node:             k8s-master.flametest.it/10.4.114.21
    Start Time:       Fri, 26 Jan 2024 17:36:10 +0100
    Labels:           app=zabbix
                      controller-revision-hash=5c7894fc6
                      name=zabbix-agent
                      pod-template-generation=11
    Annotations:      <none>
    Status:           Running
    IP:               10.4.114.21
    IPs:
      IP:  10.4.114.21
    Controlled By:  DaemonSet/zabbix-agent
    Containers:
      zabbix-agent:
        Container ID:   containerd://53a05973090fb35586ed1a24e3cba55dde26646951fd20ff52aa54a9f32baa06
        Image:          zabbix/zabbix-agent2:alpine-6.0.21
        Image ID:       docker.io/zabbix/zabbix-agent2@sha256:92c1ac9da7fa121e2a1c44cc244d5331aa100a8d4f8624ed01e18d63902ccf17
        Port:           10050/TCP
        Host Port:      10050/TCP
        State:          Running
          Started:      Fri, 26 Jan 2024 17:36:11 +0100
        Ready:          True
        Restart Count:  0
        Liveness:       tcp-socket :10050 delay=0s timeout=3s period=10s #success=1 #failure=3
        Startup:        tcp-socket :10050 delay=10s timeout=3s period=5s #success=1 #failure=5
        Environment:
          ZBX_HOSTNAME:       (v1:spec.nodeName)
          ZBX_SERVER_HOST:    0.0.0.0/0
          ZBX_SERVER_PORT:    10051
          ZBX_PASSIVE_ALLOW:  true
          ZBX_ACTIVE_ALLOW:   false
          ZBX_DEBUGLEVEL:     4
          ZBX_TIMEOUT:        4
        Mounts:
          /hostfs/proc from proc (ro)
          /hostfs/root from root (ro)
          /hostfs/sys from sys (ro)
    Conditions:
      Type                        Status
      PodReadyToStartContainers   True
      Initialized                 True
      Ready                       True
      ContainersReady             True
      PodScheduled                True
    Volumes:
      proc:
        Type:          HostPath (bare host directory volume)
        Path:          /proc
        HostPathType:
      sys:
        Type:          HostPath (bare host directory volume)
        Path:          /sys
        HostPathType:
      root:
        Type:          HostPath (bare host directory volume)
        Path:          /
        HostPathType:
    QoS Class:         BestEffort
    Node Selectors:    kubernetes.io/os=linux
    Tolerations:       node-role.kubernetes.io/control-plane:NoSchedule
                       node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                       node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                       node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                       node.kubernetes.io/not-ready:NoExecute op=Exists
                       node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                       node.kubernetes.io/unreachable:NoExecute op=Exists
                       node.kubernetes.io/unschedulable:NoSchedule op=Exists
    Events:            <none>
    As a result, the data does not arrive on the Zabbix server, which shows the error: Timeout while waiting for result.
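
    For reference, a passive check can be tested manually from the Zabbix server with the zabbix_get utility (assuming it is installed there; addresses taken from the setup above):

    Code:
    # Run on the Zabbix server (10.4.114.5); a reply of "1" means the agent
    # on the master node accepts passive checks from the server:
    zabbix_get -s 10.4.114.21 -p 10050 -k agent.ping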

    Could anyone suggest what I should check?

    Thank you all

  • ficzag
    Junior Member
    • Feb 2024
    • 2

    #2
    Hi!
    Did you solve your problem? I have the same issue with Tanzu Kubernetes and I cannot find the solution.


    • ficzag
      Junior Member
      • Feb 2024
      • 2

      #3
      It seems to be fixed in the new version (7.0.0 beta2).



      • fiftyfivee
        fiftyfivee commented
        Hi ficzag!

        I have just been given an assignment to monitor Tanzu Kubernetes using Zabbix. Did you manage to monitor Tanzu and its clusters and pods?

        Thanks
    • Luiz Armando
      Junior Member
      • Apr 2024
      • 3

      #4
      Hello!

      Adjust the Kubernetes API address:
      change 10.4.114.21:6443 to https://10.4.114.21:6443.
      Check your real address by running kubectl cluster-info.
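
      As a quick check, kubectl cluster-info prints the advertised API endpoint (hypothetical output, assuming the control plane from the setup above):

      Code:
      # Shows the address the cluster advertises for the API server:
      $ kubectl cluster-info
      Kubernetes control plane is running at https://10.4.114.21:6443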


      • gcaselli
        Junior Member
        • Oct 2022
        • 6

        #5
        Hi,

        I have a K3s cluster with Zabbix Proxy and Agent2 deployed and working fine, but I see a lot of logs in the zabbix-agent2 pod:

        Code:
        2024/06/18 14:03:19.545779 failed to process an incoming connection from 192.168.2.100: EOF
        2024/06/18 14:03:21.545999 failed to process an incoming connection from 192.168.2.100: EOF
        2024/06/18 14:03:23.549445 failed to process an incoming connection from 192.168.2.100: EOF
        2024/06/18 14:03:25.546585 failed to process an incoming connection from 192.168.2.100: EOF
        2024/06/18 14:03:27.546102 failed to process an incoming connection from 192.168.2.100: EOF
        2024/06/18 14:03:29.546137 failed to process an incoming connection from 192.168.2.100: EOF
        2024/06/18 14:03:31.545871 failed to process an incoming connection from 192.168.2.100: EOF
        2024/06/18 14:03:33.546541 failed to process an incoming connection from 192.168.2.100: EOF
        2024/06/18 14:03:35.545776 failed to process an incoming connection from 192.168.2.100: EOF
        2024/06/18 14:03:37.545778 failed to process an incoming connection from 192.168.2.100: EOF
        2024/06/18 14:03:39.545672 failed to process an incoming connection from 192.168.2.100: EOF
        2024/06/18 14:03:41.548880 failed to process an incoming connection from 192.168.2.100: EOF
        2024/06/18 14:03:43.546348 failed to process an incoming connection from 192.168.2.100: EOF
        ...and more

        Communication with the Zabbix Server and cluster monitoring work fine. The agent is version 7.0.0.

        Have you been able to resolve this?


        • matmz
          Junior Member
          • Mar 2021
          • 10

          #6
          I'm experiencing the exact same issue, receiving numerous errors like this: "Failed to process an incoming connection from XXX.XXX.XXX.XXX: EOF." Each agent logs this error, where the source IP is that of the node itself, which seems a bit odd.

          Have you found a solution to this problem?


          • fe-h
            Junior Member
            • Feb 2025
            • 1

            #7
            This is caused by the DaemonSet's livenessProbe, which performs a TCP port check, by default every 10 seconds.
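
            For illustration, this is roughly what that probe looks like in the DaemonSet spec, a sketch reconstructed from the describe output in #1 (anything not shown there is an assumption). It also explains why the source IP in the log is the node's own address: the connection comes from the kubelet running on that node.

            Code:
            # livenessProbe/startupProbe of the zabbix-agent container: every
            # periodSeconds the kubelet opens a TCP connection to port 10050 and
            # closes it immediately, which agent2 logs as "EOF".
            livenessProbe:
              tcpSocket:
                port: 10050
              periodSeconds: 10        # one "EOF" line roughly every 10 seconds per node
              timeoutSeconds: 3
              failureThreshold: 3
            startupProbe:
              tcpSocket:
                port: 10050
              initialDelaySeconds: 10
              periodSeconds: 5
              failureThreshold: 5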

