I was just reviewing the post linked below on server requirements for large enterprises and was curious whether there are any resources on how to properly partition MySQL for large Zabbix environments. We currently have a large-scale proof of concept running on the appliances, but we are moving to a more tailored CentOS 7 install to conform with corporate standards and to upgrade to the 4.0 LTS version of Zabbix. Since this is the ideal time to partition the database, before moving to the newer servers, I was hoping someone could point me to some good documentation on it. (Note: I am not a DBA, nor am I particularly skilled in MySQL administration, but I can follow the concepts and implement recommendations without being given the exact commands if I need to.)
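For reference on the kind of approach I mean: from what I have gathered in the community guides, the usual method seems to be range partitioning the history* and trends* tables on their integer clock column (daily partitions for history, monthly for trends). A rough sketch of what I understand that DDL to look like - partition names and dates here are just placeholders:

ALTER TABLE history PARTITION BY RANGE (clock)
(
    -- one partition per day; each boundary is the unix timestamp of the following midnight
    PARTITION p2019_01_01 VALUES LESS THAN (UNIX_TIMESTAMP('2019-01-02 00:00:00')),
    PARTITION p2019_01_02 VALUES LESS THAN (UNIX_TIMESTAMP('2019-01-03 00:00:00')),
    PARTITION p2019_01_03 VALUES LESS THAN (UNIX_TIMESTAMP('2019-01-04 00:00:00'))
);

My understanding is that new partitions then have to be created ahead of time on a schedule (a stored procedure or cron script), which is part of what I am hoping the documentation covers - please correct me if that picture is wrong.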
Realizing that a lot of this depends on what we are collecting and how long we are keeping it, I will pass along the known information:
We are currently tracking about 75,000 items across ~1000 hosts (not saying all are useful - I inherited this and haven't culled anything yet).
We are currently running at about 1200 VPS.
There has been no formal declaration of historical data retention, so I am open to suggestions. I am thinking 3 months should be sufficient, but I'm not sure (see the sketch after this list for how I understand retention would interact with partitioning).
We are running at least 5 proxy servers feeding into the server to offload the collection process, but they are using almost none of the resources allotted, so I am thinking they need to be pared down.
Everything is running as VMs in a distributed VMware environment across several different clusters.
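One assumption of mine tied to the retention item above: if the history and trends tables are partitioned by time, retention would be handled by dropping whole partitions rather than by the housekeeper deleting rows, which I gather is far cheaper. Something along these lines (again with a placeholder partition name):

-- with partitioning, housekeeping for history/trends gets disabled in the frontend,
-- and expired data is removed by dropping the oldest partition instead:
ALTER TABLE history DROP PARTITION p2019_01_01;

If 3 months ends up being the window, that would mean keeping roughly 90 daily partitions per history table and dropping the oldest one each day.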
My current thinking is to start with a design like this:
1 Zabbix database server (MySQL) - 2 CPU, 16 GB RAM, both hot-add capable as we scale up. (How to partition it for efficient use is the question.)
1 Zabbix application server with web frontend - 1 CPU, 4 GB RAM, both hot-add capable as we scale.
3+ proxies - 1 CPU, 4 GB RAM each, all hot-add capable as needed.
These resource numbers come from matching the high-water marks in vCenter resource utilization, so if there is something I am missing I can still add resources as needed - that is fine.