Monday, 16 July 2012

Be ready to scale

It makes sense to use only the resources you need, to save on costs. In my last post I talked about running your entire stack on a single node, so for example your web server, web application and database all running on one node.

Hopefully, sooner rather than later, your system will need to scale horizontally because you have exceeded all the resources available on a single node. To make the process as painless as possible you should use a configuration management tool that allows modularity. I personally prefer Puppet, which lets you create modules with classes that can take parameters.

When setting up your single node you can create a module per piece of your system, so you could have a database module, a web server module, an app server module and so on. You will probably also have modules for the software your system is built on, Nginx and PostgreSQL for example. To distinguish the app stack modules I prefix them with the name of my system (using underscores, since Puppet module and class names cannot contain hyphens). So I will have

mysystem_db
mysystem_app
mysystem_web

Each of these modules is essentially a class that can take parameters, such as database host names, which can then be used in the module directly or via its templates when generating configuration files. When you have one node most of the pertinent parameters will point to the same host, or localhost. However, once you want to scale out you can easily pass in the new database host, for example.
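
As a rough sketch (the class name, parameters and template path here are just illustrative), a parameterised app module might look like this:

class mysystem_app($db_host = "localhost", $db_port = 5432) {
  # Render the app's config file from an ERB template that
  # references db_host and db_port.
  file { "/etc/mysystem/app.conf":
    ensure  => present,
    content => template("mysystem_app/app.conf.erb"),
  }
}

# On the single node the defaults are fine:
#   class { "mysystem_app": }
# Once the database moves to its own node:
#   class { "mysystem_app": db_host => "db1.example.com" }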

Once you have these building block modules you can have different server roles that your main Puppet site script (site.pp) can check before applying the particular modules. On my initial node I would have, say, a primary role that would just apply all the modules with parameters pointing to localhost. Then once I want a separate database server, for example, I would have two roles, one webapp and one database. The webapp role would apply mysystem_app and mysystem_web, and the database role would apply mysystem_db.
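
To sketch the idea (assuming a $role variable, which the role-discovery module mentioned below would provide), site.pp might dispatch like this:

node default {
  case $role {
    "database": {
      include mysystem_db
    }
    "webapp": {
      # Hypothetical internal hostname for the database node.
      class { "mysystem_app": db_host => "db1.internal" }
      include mysystem_web
    }
    default: {
      # The primary role: the whole stack on one node, talking to localhost.
      include mysystem_db
      class { "mysystem_app": db_host => "localhost" }
      include mysystem_web
    }
  }
}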

The other advantage of having Puppet able to set up your single node from a vanilla OS is that you can recover from a disaster much quicker than if you had set everything up manually.

You can see from my previous posts that I prefer a puppetmasterless setup, where I push updates with git rather than have a puppetmaster handle it. But this raises the question: how does puppet know which node role to apply? This is easily achieved using a few more advanced bits of puppet and a special module we build that discovers the role. We seed the role with our bootstrap shell script, which is passed as a parameter to it. I will reveal all in another post.

Feel free to leave comments or ask questions if anything is unclear.





Tuesday, 10 July 2012

Scale when you need to....

In my post One Node I talked about a kind of scaling within a single node (a server), where you run multiple instances of your service and let a load balancer round robin the requests. This is a sort of horizontal intra-node scaling that helps keep your service handling requests; if one of your instances dies you can still rely on the other ones to keep serving.

If you found that the load was becoming too high you could launch more instances, but you will come to a point where this doesn't really help due to resource problems, such as CPU/memory or IO saturation. It is tempting at this point to start to architect an elaborate multi-node setup, ripping your application stack into layers: for example 3 separate database nodes with replication, 4 app server nodes with auto scaling, a few load balancing nodes, maybe a few memcache nodes. However, unless you have set up a similar infrastructure before, you will soon disappear down a rabbit hole of complexity and stress.

Keeping everything on one node means all the parts of your application are as close as possible, and it keeps configuration to a minimum. Scaling vertically by adding memory/CPU cores and better IO will keep you going, and most cloud providers offer big fat nodes for you to grow into. Of course this can only go so far, and eventually you will have to spin up another node, but you shouldn't do this until you need to.

There are usually a few arguments against running on a single node (assuming your service can run well on a single node).
One is that having only one node puts you at risk of outages, as you have no fallback if your server goes bye bye. Another is that when it does come time to scale horizontally by adding more nodes, you will have to pick apart your application stack, causing a delay in getting the load under control.

These arguments are valid; however they can easily be overcome by making your configuration modular, being able to recreate your node from scratch in a few minutes, and using a decent VPS service with good uptime guarantees. So you can keep things simple and scale as and when you really need to.

I will go into these in more detail in my next post.


Monday, 9 July 2012

What was that command again?

I get most of my work done via a command line interface. I use iTerm2 on my MacBook Air, which lets me set up profiles, run full screen and has many other useful features.

One very useful command when working in the terminal is the history command. It lists all the commands you have entered in that session, numbered in the order you entered them, and it also shows the time. Below is an example:


dev-macbook # history
....
460  20:04 > vagrant destroy
461  20:04 > vagrant up
462  20:06 > vagrant ssh
463  20:07 > vim node-init.sh 
464  20:08 > vagrant up
......

You can then repeat a command by typing its number preceded by a bang (exclamation mark). With history verification enabled, the shell will show the command at the prompt but won't run it, which is useful in case you accidentally enter the wrong number or need to tweak the command before executing it.

dev-macbook # !461
dev-macbook # vagrant up
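
Note that by default bash executes the expanded command immediately; the show-first behaviour above comes from turning on history verification, for example with this line in your .bashrc (zsh has an equivalent HIST_VERIFY option):

shopt -s histverify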

To make it even easier to use I have this alias in my .bashrc file.

alias h='history | grep'

This means I can filter my history with a short command; the following shows all the history entries containing the word git.

dev-macbook # h git
....
109  19:40 > vim .git/config 
111  19:41 > git pull
279  21:30 > cd .git/
289  21:45 > cd .git/
296  21:49 > vim .git/config 
316  21:51 > vim ../devops/puppet/.git/config 
......

Hope that helps. I will try to share some more tips along the way, but next I will return to the main topic.

Sunday, 8 July 2012

Puppet with no strings attached



In my previous post I talked about configuring a single node on a virtual machine using Vagrant with a Shell provisioner. The shell script, which is run on the first boot of a vanilla OS, will install puppet and git, then clone your puppet repository onto the node and run apply on it. I won't go into the ins and outs of Puppet as they cover it really well in their documentation.

The standard setup of puppet makes use of a puppet server which all your nodes (as puppet clients) connect to; however it requires a bit of setup and has issues, which are covered in this excellent blog post. The blog post explains how to set up puppet using an empty git repo which you then push to via git. Git has hooks that let you run code on certain events; a post-receive hook on the node will run after you have pushed to it. I do this setup of the empty repo and the post-receive hook via my base puppet module. Below is an example of a puppet script; this is run as part of the initial puppet apply described in my last post.

# Note: Puppet class names cannot contain hyphens, hence the underscore.
class my_base {

    user { "git":
      ensure => "present",
      home   => "/var/git",
    }

    # Allow the git user to sudo, so the post-receive hook can
    # write to /etc/puppet and run puppet apply.
    file { "/etc/sudoers.d/100-git":
      owner   => root,
      group   => root,
      mode    => "0440",
      content => "git ALL=(ALL) NOPASSWD:ALL\n",
      require => User["git"],
    }

    file {
      "/var/git":
        ensure  => directory,
        owner   => git,
        require => User["git"];
      "/var/git/.ssh":
        ensure  => directory,
        owner   => git,
        require => [User["git"], File["/var/git"]];
      "/var/git/puppet":
        ensure  => directory,
        owner   => git,
        require => [User["git"], File["/var/git"]];
    }

    # The public key we push with, so git over ssh works.
    ssh_authorized_key { "git":
      ensure  => present,
      key     => "YOURKEY",
      user    => git,
      name    => "git@yourdomain.com",
      target  => "/var/git/.ssh/authorized_keys",
      type    => rsa,
      require => File["/var/git/.ssh"],
    }

    package { "git":
      ensure => installed,
    }

    # Create the bare repo we push to; "creates" makes this idempotent.
    exec { "createPuppetRepo":
      cwd     => "/var/git/puppet",
      user    => "git",
      command => "/usr/bin/git init --bare",
      creates => "/var/git/puppet/HEAD",
      require => [File["/var/git/puppet"], Package["git"], User["git"]],
    }

    # The post-receive hook: check out the pushed tree into /etc/puppet
    # and apply it, logging the output.
    $hook_puppet = "#!/bin/sh
git archive --format=tar HEAD | (cd /etc/puppet && sudo tar xf -)
sudo sh -c \"puppet apply /etc/puppet/manifests/site.pp >> /var/log/puppet/puppet.log 2>&1\"
"

    file { "/var/git/puppet/hooks/post-receive":
      ensure  => present,
      content => $hook_puppet,
      group   => git,
      mode    => "0755",
      owner   => git,
      require => Exec["createPuppetRepo"],
    }

}

After this is applied I can push puppet updates from my local puppet repo using the commands below, which add my vagrant vm node as a remote and then push my changes. You would also want to push the changes to your hosted repo so new nodes get the latest version.

dev-macbook # git remote add mynode ssh://git@mynode/var/git/puppet
dev-macbook # git push mynode master

Once this is received it will apply the new puppet config via the hook we set up. I also create similar push git repos for application code (Python service code); in the post-receive hook for those I restart the service instances managed by the Supervisor daemon, which I will cover in a later post.
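
As a rough sketch (the checkout path is hypothetical), an application post-receive hook might look like this:

#!/bin/sh
# Check out the pushed code into the application directory...
git archive --format=tar HEAD | (cd /var/www/myapp && tar xf -)
# ...then restart the service instances that Supervisor manages.
sudo supervisorctl restart all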

Now we have the mechanisms to manage a node using the two commands below.

dev-macbook # vagrant up
....Some time later after updating puppet......
dev-macbook # git push mynode master

This setup can be used in most places, including Amazon's EC2, where the initial shell script would be passed as the user-data; you could do this via a boto python script. Also, Linode has StackScripts, which would allow you to run this script to get the node provisioned and ready to go.
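
For example, a minimal boto sketch (the region, AMI id, key pair and instance type are placeholders, and your AWS credentials are assumed to be in the environment or boto config):

import boto.ec2

# The same bootstrap script Vagrant's shell provisioner runs.
user_data = open("node-init.sh").read()

conn = boto.ec2.connect_to_region("eu-west-1")
conn.run_instances(
    "ami-xxxxxxxx",            # placeholder AMI id
    key_name="my-keypair",     # placeholder key pair name
    instance_type="m1.small",
    user_data=user_data,
)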

You will probably quickly grow out of having one node, or one type of node; for example you may have a database node, an application server node, a load balancer node etc. You can still use the mechanism above to achieve this, with just an addition to the Shell script and a special Puppet module. I will cover this next time.




Vagrant, Puppet and git

Vagrant, Puppet and git are powerful tools on their own; used together you can have a fully functioning, configured virtual box (or multiple boxes) with one command, and you can push changes to boxes with another command.

This is the command to create a node:

dev-macbook # vagrant up

This launches a virtual machine based on a base box (e.g. Ubuntu 12.04) via VirtualBox. Vagrant uses a Vagrantfile to configure the box, things like port forwarding, host name etc; however I prefer to leave all the configuration to puppet (which we will see later).

One very useful thing that the Vagrantfile can configure is a provisioner; it has support for Puppet, Chef and good ole Shell provisioners. What this boils down to is that just after the machine is created (the first time it boots) it calls the provisioner. I use the Shell provisioner as I want to use the same mechanism for bootstrapping a node whether it's with Vagrant or with a cloud provider, and shell scripts are the most widely supported. Another reason is that some cloud providers allow only a limited length for a bootstrap script.
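
A minimal Vagrantfile sketch (Vagrant 1.0 syntax; the box name and script name are just examples):

Vagrant::Config.run do |config|
  # The base box to build the VM from.
  config.vm.box = "precise64"
  # Run the same bootstrap script a cloud provider would get as user-data.
  config.vm.provision :shell, :path => "node-init.sh"
end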

The bash script does a couple of things: it installs git and puppet, then it sets up an ssh key for root to enable an automatic git clone of a puppet repository. Bitbucket is a good choice for hosting your repository as it allows unlimited free private repos with git ssh access. Once the puppet repo is cloned it calls puppet apply on the site.pp. I based my script on this excellent blog post, the part under "The Code"; it is focused on AWS EC2 but it also works for Vagrant.
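
In outline the script does something like this (a sketch assuming Ubuntu; the repository URL is a placeholder and the deploy key handling is elided):

#!/bin/bash
# Install the two tools everything else hangs off.
apt-get update
apt-get install -y git puppet

# Give root a deploy key so it can clone the private puppet repo.
mkdir -p /root/.ssh
chmod 700 /root/.ssh
# (copy your deploy key into /root/.ssh/id_rsa here)
ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts

# Clone the puppet repo and hand over to puppet.
git clone git@bitbucket.org:youruser/puppet.git /etc/puppet
puppet apply /etc/puppet/manifests/site.pp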

Puppet then takes over and builds your node, and I get puppet to do a few things that allow pushing puppet changes to the node (or many nodes). I will cover this in my next post.






Friday, 6 July 2012

One Node


To make life as easy as possible for your future self it is useful to think ahead about scenarios that might happen, and be ready to adapt quickly to them. If we assume that our system will start with a few hundred users, then one instance of your system (a running process that responds to requests) should be able to handle the load. The scenario we need to plan for is when the system gets a sudden increase in users; then you probably need to fire up some more instances to share the load.

The first step is to have a few instances of your service running within one machine, and then place something in front of them to hand out the requests in some manner that allows the load to be shared. There are many options, but if your requests are HTTP/HTTPS then Nginx is excellent for this.

Making sure your service doesn't hold state between requests means you can run multiple instances on different ports and let Nginx handle the load balancing between them. Here is an example of the upstream section from an Nginx config:

upstream my_instances {
    server 127.0.0.1:5000;
    server 127.0.0.1:5001;
    server 127.0.0.1:5002;
}

# Inside the server { } block:
location /my_service {
    proxy_pass http://my_instances;
}

This tells Nginx to share requests between the 3 instances running on the 3 ports (5000-5002); an upstream block is balanced round robin by default. Any request to http://yourserver/my_service would be proxied out to these three instances. This type of scaling can only go so far though; it depends on what your service does and the actual resources you have on the node (CPUs, memory, IO). To scale further we need to add more nodes, so we will have multiple Nginx nodes each talking to multiple instances of the services. Before we get to that, we want to be able to easily build a node configured with everything we need; if possible we want this to be via one command or, better yet, automatic.

Thursday, 5 July 2012

One man band

Is it possible for one person to run a company while it continues to grow? It all depends on the nature of the business and its products of course, but if we focus on a company offering web/mobile applications then I claim the answer is yes, for longer than you might think.

With clever use of current tools and services, a skilled developer willing to learn some new skills can manage an entire infrastructure that can serve millions of users.

I am currently on this journey, and although I won't talk about my company or its products, I wanted to share my experiences using cloud, dev-ops, automation and various tools, and just see how far I can push it on my own.