In the 3.0 Release notes, I saw an interesting item (paraphrasing):

"zabbix_agent support will be dropped because apparently no one was using inetd."
I find this ironic given that zabbix_agent was the only solution to a peculiar problem that has evolved within the last two years and is coming down the collective pipes of sysadmins everywhere: the Docker phenomenon.
To be more succinct, we are seeing an increase in the demand to host and monitor services (or service sets) which are completely unattached to their physical -- or even virtual -- servers. This presents a serious challenge to the Zabbix monitoring design -- and to most other monitoring service designs.
I ran into this a few years ago while managing a RedHat 5 Cluster Engine cluster. The services in question were NFS mounts which would dynamically migrate from one host to another. Each service was tied to a service IP, and multiple services could end up on the same host. There was no sensible way to monitor these services as if they were tied to the physical hosts, so I had to create Zabbix "hosts" which lived at the service IP and contained only the items relevant to the service being monitored. But this led to another problem: the zabbix agent was not too keen on being used to monitor IP addresses it didn't know about (maybe that was an old problem, or my memory is faulty here). Certainly, active checks on such a service could not work.
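For illustration, here is a minimal sketch of how such a service-IP "host" could be registered through the Zabbix JSON-RPC API instead of by hand. The URL, credentials, host name, group ID and template ID are placeholders, and the login parameter names shown here match the 2.x/3.x API era (newer versions renamed them), so treat this as a sketch rather than the one true way.

#!/usr/bin/env python3
"""Sketch: register a floating service IP as its own Zabbix "host".

Assumes the Zabbix JSON-RPC endpoint (api_jsonrpc.php); URL, credentials,
group ID and template ID below are placeholders, not real values.
"""
import json
import urllib.request

ZABBIX_URL = "http://zabbix.example.com/api_jsonrpc.php"  # placeholder

def api_call(method, params, auth=None, req_id=1):
    payload = {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}
    if auth:
        payload["auth"] = auth
    req = urllib.request.Request(
        ZABBIX_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))["result"]

# Log in (the parameter was "user" in the 2.x/3.x API; newer versions use "username").
auth = api_call("user.login", {"user": "Admin", "password": "zabbix"})

# Create a "host" that is really just the floating service IP, holding only
# the items relevant to that service (attached via a template).
api_call("host.create", {
    "host": "nfs-service-1",                          # placeholder name
    "interfaces": [{"type": 1, "main": 1, "useip": 1,
                    "ip": "192.0.2.10",               # the service IP
                    "dns": "", "port": "10050"}],
    "groups": [{"groupid": "2"}],                     # placeholder group ID
    "templates": [{"templateid": "10001"}],           # placeholder template ID
}, auth=auth)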
With Docker, the situation is much worse: miniature "containers" hold the service that is running, and said service typically sits behind a NAT'd, dynamically created IP address. The typical Docker setup is a single process or process tree and does not include services such as sshd, init, or inetd. However, that typical setup is not very usable in production, monitored environments; there the container needs to hold these other things as well. But you're still left with a dynamic IP, which limits the utility of running a zabbix agent daemon inside the container. By contrast, an inetd-style agent would make more sense here. (It's also the mechanism of choice for the check_mk monitoring system.)
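To show what the inetd model looks like in practice, here is a rough sketch in the spirit of the old zabbix_agent: inetd accepts the TCP connection and hands the socket to the program as stdin/stdout, which reads one item key and answers with one value. The keys handled below are hypothetical examples, and a real agent would also speak the binary ZBXD framing that modern passive checks use; this sketch deliberately stays with plain text.

#!/usr/bin/env python3
"""Sketch of an inetd-style check responder.

inetd passes the accepted connection as stdin/stdout; we read a single item
key and print a single value. The keys are illustrative only, and the binary
ZBXD protocol header used by modern Zabbix passive checks is ignored here.
"""
import os
import sys

def _comm(pid):
    # Read the process name from /proc/<pid>/comm; empty string on failure.
    try:
        with open("/proc/%s/comm" % pid) as f:
            return f.read().strip()
    except OSError:
        return ""

def handle(key):
    if key == "agent.ping":
        return "1"
    if key == "system.uptime":
        with open("/proc/uptime") as f:
            return str(int(float(f.read().split()[0])))
    if key == "proc.num[myservice]":
        # Crude process count by name; "myservice" is a placeholder.
        count = sum(1 for pid in os.listdir("/proc")
                    if pid.isdigit() and _comm(pid) == "myservice")
        return str(count)
    return "ZBX_NOTSUPPORTED"

if __name__ == "__main__":
    key = sys.stdin.readline().strip()
    sys.stdout.write(handle(key) + "\n")
    sys.stdout.flush()

Wiring this up under classic inetd would be a single inetd.conf line along the lines of "zabbix-agent stream tcp nowait zabbix /usr/local/bin/zbx_responder.py zbx_responder.py" (service name, user and path are placeholders, and the service name would need a matching /etc/services entry for port 10050).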
This monitoring complexity is exacerbated by things like Kubernetes, Swarm, and other container-clustering technologies. Services will not only have random IPs, but random ports! A dedicated discovery agent figures out where these services are running and is queried so that traffic can be redirected accordingly. (The right thing to do would be to finally properly extend DNS to announce services, like RPC did aeons ago. But I digress.) Any decent monitoring system will need to adapt to this scenario, especially w.r.t. automatic service discovery.
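To make the discovery angle concrete, here is a rough sketch of a container discovery hook feeding Zabbix low-level discovery (LLD): it asks the local Docker engine for running containers and prints the {"data": [...]} JSON that LLD rules consume. It assumes the docker Python SDK is installed, and the macro names {#CNAME}, {#CIP} and {#CPORT} are my own choice; they would have to match whatever the LLD rule and item prototypes are configured to use.

#!/usr/bin/env python3
"""Sketch: emit Zabbix low-level discovery (LLD) JSON for running containers.

Assumes the 'docker' Python SDK and a local Docker engine socket; the macro
names are arbitrary and must match the item prototypes on the Zabbix side.
"""
import json
import docker

def discover():
    client = docker.from_env()
    entries = []
    for c in client.containers.list():              # running containers only
        nets = c.attrs["NetworkSettings"]
        ip = next((n.get("IPAddress", "") for n in nets["Networks"].values()), "")
        ports = nets.get("Ports") or {}
        # One LLD entry per published (host) port; dynamically assigned by Docker.
        published = [m[0]["HostPort"] for p, m in ports.items() if m] or [""]
        for port in published:
            entries.append({"{#CNAME}": c.name,
                            "{#CIP}": ip,
                            "{#CPORT}": port})
    return {"data": entries}

if __name__ == "__main__":
    print(json.dumps(discover(), indent=2))

On the Zabbix side this could be hooked in as a UserParameter or an external check backing an LLD rule, with item prototypes using {#CIP} and {#CPORT} to point their checks at whatever address and port the container happened to get.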
I hope Zabbix can remain at the forefront and adapt to this new tech gracefully and intelligently.

n". On RHEL7 you can check did someone enabled this service using systemd commands. On other types of distributions such check can be done using other method. IMO key like system.service[] could be used on hiding such details making templates more portable on time scale and/or on moving between distributions.