Friday 6 July 2012

One Node


To make life as easy as possible for your future self it is useful to think ahead about scenarios that might happen, and be ready to adapt to them quickly. If we assume that our system will start with a few hundred users, then one instance of your system (a running process that responds to requests) should be able to handle the load. The scenario we need to plan for is a sudden increase in users, at which point you will probably need to fire up more instances to share the load.

The first step is to have a few instances of your service running within one machine, and then place something in front of them to hand out the requests so the load is shared. There are many options, but if your requests are HTTP/HTTPS then Nginx is excellent for this.

Making sure your service doesn't hold state between requests means you can run multiple instances on different ports and let Nginx handle the load balancing between them. Here is an example of the upstream section from an Nginx config.

upstream my_instances {
    server 127.0.0.1:5000;
    server 127.0.0.1:5001;
    server 127.0.0.1:5002;
}

location /my_service {
    proxy_pass http://my_instances;
}

This tells Nginx to share requests between the three instances running on the three ports (5000-5002). Any request to http://yourserver/my_service would be proxied out to these three instances. This type of scaling can only go so far, though; how far depends on what your service does and the actual resources you have on the node (CPUs, memory, IO). To scale further we need to add more nodes, so we will have multiple Nginx nodes, each talking to multiple instances of the service. Before we get to that, we want to be able to easily build a node configured with everything we need, ideally via one command or, better yet, automatically.
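To give a feel for the service side of this setup, here is a rough sketch (not from the original post) of a hypothetical stateless service in Python that takes its port on the command line, so the same program can be started three times on ports 5000-5002 behind the upstream above. The file name minimal_service.py and the JSON response are illustrative assumptions only.

# minimal_service.py - hypothetical stateless service; port is passed on the command line
import json
import sys
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # No per-user state is kept between requests, so any instance
        # behind the Nginx upstream can answer any request.
        body = json.dumps({"path": self.path, "port": self.server.server_port}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    port = int(sys.argv[1])  # e.g. 5000, 5001 or 5002
    HTTPServer(("127.0.0.1", port), Handler).serve_forever()

Starting three copies (for example, python minimal_service.py 5000, then 5001 and 5002) gives Nginx three identical backends to balance across; because nothing is stored in the process between requests, it doesn't matter which instance handles which request.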
