
1 May 2017

Keeping Provisioning and Deployment Simple

I think Docker, Kubernetes, Ansible, Chef, Puppet, et al are all too complicated.

The short history of Docker is baffling. Since the first release in 2013, the following technologies have been used, only some of which are still used by Docker: LXC, libcontainer, runc, AUFS, OverlayFS, Boot2Docker, Docker Machine, Docker Toolbox, Docker For Mac, Docker Compose, Docker Swarm. There's now Docker CE, Docker EE (with a whole one year of support!) and Moby.

Kubernetes is designed for a scale most people won't see.

There are over 90 bullet points in the documentation about Ansible Playbooks alone, which are only one part of Ansible.

There are 17 Concepts listed for Chef.

The point is that all of these technologies are complicated and go out of date quickly, which means a significant up-front investment followed by ongoing maintenance.

I'll describe a simpler alternative here. The main idea behind all of the tools above is infrastructure-as-code, and I believe a combination of shell scripting and a language of your choice (Python here) can provide simplicity and flexibility without having to deal with any of them.

Provisioning

Many hosting providers such as Vultr, OVH, Linode and AWS support APIs to start servers. It can be as simple as the following:


curl -H 'API-Key: EXAMPLE' https://api.vultr.com/v1/server/create --data 'DCID=1' --data 'VPSPLANID=1' --data 'OSID=127'

Or the equivalent Python:

import requests

# `settings` is wherever you keep configuration, e.g. your framework's settings module
resp = requests.post(
    'https://api.vultr.com/v1/server/create',
    headers={
        'API-Key': settings.VULTR_API_KEY,
    },
    data={
        'DCID': 1,
        'VPSPLANID': 1,
        'OSID': 127,
    },
)

The next step is usually just installing some things. At this point I'll introduce a Python library called Spur, which allows you to run commands remotely over SSH, as well as locally using the same interface.

Here's an example that installs a couple of things on a remote server:


import spur

shell = spur.SshShell(
    hostname="12.34.56.78",
    username="ubuntu",
    # depending on your SSH setup you may also need
    # private_key_file="..." or password="..."
)
shell.run(
    ["apt-get", "update"]
)
shell.run(
    ["apt-get", "-y", "install",
     "nginx", "python3", "python3-pip", "python3-venv"]
)

This can be integrated into your existing app, even going as far as adding a provision button to a dashboard, as I did.
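
The view behind such a button might look something like the sketch below. It assumes a Django app (any framework would do), a hypothetical "dashboard" URL name, and a provision_server() helper that just wraps the requests.post call from earlier.


import requests
from django.conf import settings
from django.contrib.admin.views.decorators import staff_member_required
from django.shortcuts import redirect


def provision_server():
    # the same Vultr API call as shown above
    requests.post(
        'https://api.vultr.com/v1/server/create',
        headers={'API-Key': settings.VULTR_API_KEY},
        data={'DCID': 1, 'VPSPLANID': 1, 'OSID': 127},
    )


@staff_member_required
def provision(request):
    if request.method == "POST":
        provision_server()
    return redirect("dashboard")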

You might think the argument lists in those shell.run calls look a bit ugly, and you'd be right. Having to separate the arguments manually is annoying, but Python has a module called shlex in the standard library:


>>> import shlex
>>> print(shlex.split("apt-get -y install nginx python3 python3-pip python3-venv"))
['apt-get', '-y', 'install', 'nginx', 'python3', 'python3-pip', 'python3-venv']
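
shlex.split combines nicely with Spur, so the install command from earlier could be written as:


shell.run(shlex.split("apt-get -y install nginx python3 python3-pip python3-venv"))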

Deployment

Web server infrastructure commonly consists of a proxy server in front of multiple app servers, so I'll demonstrate how the proxy can be updated dynamically during a deploy. This example doesn't contain the commands that actually deploy the code to each server, since those vary massively between languages and projects.


nginx_tpl = """
upstream app_servers {
    %s;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://app_servers;
    }
}
"""

servers = ['a.a.a.a', 'b.b.b.b', 'c.c.c.c']
active_servers = list(servers)
proxy = "p.p.p.p"

proxy_shell = spur.SshShell(
    hostname=proxy,
    username="ubuntu",
)

def deploy_to_server(server):
    ...  # removed for brevity

def update_proxy(active_servers):
    app_servers = "\n".join(["server %s;" % server for server in active_servers])
    nginx_conf = nginx_tpl % app_servers
    remote_path = "/etc/nginx/sites-enabled/proxy.conf"
    with proxy_shell.open(remote_path, 'wb') as remote_file:
        remote_file.write(nginx_conf.encode('utf-8'))
    proxy_shell.run(['service', 'nginx', 'reload'])


for server in servers:
    # Take each server out of the proxy's upstream pool before deploying to it,
    # then add it back once the deploy has finished.
    active_servers.remove(server)
    update_proxy(active_servers)
    deploy_to_server(server)
    active_servers.append(server)
    update_proxy(active_servers)
