ZABBIX Forums  

#1  07-11-2017, 16:51
jeroenAP (Junior Member)
Clustered setup in AWS

Hi,

We are considering Zabbix for our infrastructure monitoring and I am wondering if the following setup is possible and maintainable:

I want to create an RDS instance (Aurora, Multi-AZ) as the DB, and an Auto Scaling group of 2 (or more) server instances that will automatically join/heal as instances die or get terminated, and do the same for the frontend and proxies. For the frontend and proxies I would also add an ALB.

From what I've read so far about clustered/HA setups, most of this should be achievable fairly easily, but I'm wondering about the server backend. As I understand it, the documented setups are all static: they allow one server to die, which can then be restored and returned to the cluster. I would like a setup where I can easily add and remove instances in this cluster and preferably share the load across them (possibly with an ALB in front).

Any ideas/suggestions?

Regards,
Jeroen

#2  08-11-2017, 10:37
jan.garaj (Senior Member, Zabbix certified specialist)

Your design is OK only for a cloud-ready HTTP(S) application, and in Zabbix that's just the frontend. For that part there is no problem with horizontal scaling and load balancing, because user sessions are stored in the DB.

Horizontally scaling the other parts is a naive idea in the Zabbix case (or at least you need to know how that kind of scaling affects Zabbix functionality). Also, an ALB is the wrong choice: Zabbix (agent/server/proxy) doesn't use HTTP for its communication, so you would need an ELB (TCP load balancing) instead.
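
For illustration, a classic ELB with a plain TCP listener in front of the proxies could be created roughly like this with boto3 (untested sketch; the name and subnet are placeholders, and port 10051 assumes active agents talking to the proxy):

Code:
import boto3  # AWS SDK for Python

elb = boto3.client("elb", region_name="eu-west-1")

# Classic ELB doing TCP pass-through on the Zabbix trapper port,
# so active agents can reach whichever proxy instance is alive.
elb.create_load_balancer(
    LoadBalancerName="zabbix-proxy-tcp",    # placeholder name
    Listeners=[{
        "Protocol": "TCP",
        "LoadBalancerPort": 10051,          # Zabbix proxy/server port
        "InstanceProtocol": "TCP",
        "InstancePort": 10051,
    }],
    Subnets=["subnet-12345678"],            # placeholder subnet
)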

IMHO a better option for Zabbix on AWS (if you need only HA, not scalability) is ECS/Lambda.
__________________
Devops Monitoring Expert advice: Dockerize/automate/monitor all the things.
My DevOps stack: Docker / Kubernetes / Mesos / ECS / Terraform / Elasticsearch / Zabbix / Grafana / Puppet / Ansible / Vagrant

#3  08-11-2017, 10:59
jeroenAP (Junior Member)

Hi Jan,

Thank you for your response. I did indeed mean the Network Load Balancer for this setup. Wouldn't that work for the proxy? If not, can you explain why?

As for the scaling, for now I'd just use it for resiliency (have two nodes; if one dies, Auto Scaling will spin up a new one).

Can you also elaborate a bit more on how you would use ECS/Lambda?

Kind regards,
Jeroen

#4  08-11-2017, 11:33
jan.garaj (Senior Member, Zabbix certified specialist)

ECS:
- frontend service with multiple containers (because scaling is not a problem here)
- backend service with a single container (because scaling is a problem)
- proxy service with multiple containers should be fine

If any container goes down, the ECS scheduler automatically starts a new container for that particular service. The ECS cluster itself can be autoscaled (it's just a bunch of EC2 instances).
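
As a rough sketch of what those services could look like with boto3 (untested; cluster, service, and task-definition names are placeholders):

Code:
import boto3  # AWS SDK for Python

ecs = boto3.client("ecs", region_name="eu-west-1")

# Frontend: scale out freely, user sessions live in the DB anyway
ecs.create_service(
    cluster="zabbix",                     # placeholder cluster name
    serviceName="zabbix-frontend",
    taskDefinition="zabbix-frontend:1",   # placeholder task definition
    desiredCount=3,
)

# Backend (Zabbix server): exactly one task; if the container dies,
# the ECS scheduler replaces it - HA without horizontal scaling
ecs.create_service(
    cluster="zabbix",
    serviceName="zabbix-server",
    taskDefinition="zabbix-server:1",
    desiredCount=1,
)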

Lambda: the problem is knowing when to deregister (or register) a host. EC2 generates a state-change event, and a Lambda function processes it and deregisters (or registers) the host via the Zabbix API. Example: https://github.com/cavaliercoder/ZabbixConference2017
That's useful if you have a dynamic AWS environment (for example, EMR clusters that run on demand).
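
A minimal sketch of such a handler (not the code from that repo; it assumes hosts are registered in Zabbix under their EC2 instance id, and the env variable names are made up):

Code:
import json
import os
import urllib.request

ZABBIX_URL = os.environ["ZABBIX_API_URL"]   # e.g. https://zabbix.example.com/api_jsonrpc.php
ZABBIX_USER = os.environ["ZABBIX_USER"]
ZABBIX_PASS = os.environ["ZABBIX_PASS"]

def zbx(method, params, auth=None):
    """Call the Zabbix JSON-RPC API and return the 'result' field."""
    payload = json.dumps({"jsonrpc": "2.0", "method": method,
                          "params": params, "auth": auth, "id": 1}).encode()
    req = urllib.request.Request(ZABBIX_URL, payload,
                                 {"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]

def handler(event, context):
    """Triggered by an EC2 instance state-change event from CloudWatch Events."""
    if event["detail"]["state"] != "terminated":
        return
    instance_id = event["detail"]["instance-id"]
    token = zbx("user.login", {"user": ZABBIX_USER, "password": ZABBIX_PASS})
    hosts = zbx("host.get", {"filter": {"host": [instance_id]}}, token)
    if hosts:
        zbx("host.delete", [hosts[0]["hostid"]], token)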

#5  08-11-2017, 12:18
jeroenAP (Junior Member)

Okay, that sounds a bit like the setup I had in mind, but I like the containerized approach!

Do you also make use of load balancers here? Maybe Fabio?

We use a similar setup with a Lambda function for our Octopus Deploy setup, so thanks for the reference.

For the Zabbix server itself, do you have a single Docker image that contains the config? And when you want to update, do you just build a new image and replace it? Does this setup also make sure that no data is lost?

Regards,
Jeroen

#6  08-11-2017, 12:59
jan.garaj (Senior Member, Zabbix certified specialist)

I don't see a reason why you would need Fabio. ECS already integrates load balancing (I think it's only ALB, not classic ELB; please check the docs).

Best practice for configuring containers is environment variables (https://12factor.net/config); I don't see a reason why config files should be part of the image. Even TLS certificates (or any files) can be passed as base64-encoded strings via env variables (I have to use this approach in some of my applications because of Cloud Foundry). If you really need config files, the Kubernetes concept of secrets is a better approach.
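
If I remember correctly, the official zabbix-docker images already read their settings from env variables such as DB_SERVER_HOST, so usually no config file is needed at all. And a container entry-point can rebuild a cert file from a base64 variable with a few lines, for example (just a sketch; the variable name and target path are made up):

Code:
import base64
import os

# Entry-point helper: rebuild a TLS cert file from a base64-encoded
# env variable (variable name and target path are made up here)
cert_b64 = os.environ.get("ZBX_TLS_CERT_B64")
if cert_b64:
    with open("/etc/zabbix/zabbix_server.crt", "wb") as f:
        f.write(base64.b64decode(cert_b64))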

#7  08-11-2017, 14:07
jeroenAP (Junior Member)

Great stuff! Thank you for your help!