Zero downtime with Ansible, DigitalOcean, Spring-boot and fair

If you provide any service, be it web software or even gas distribution, 24/7 is an undeniable competitive asset.
For online software it translates to zero downtime.
Zero downtime is made possible by one technique: load balancing. While load balancing was initially meant to distribute load, it has also proven useful for failover and maintenance: it allows an instance under upgrade or maintenance to rely on the other instances to keep providing the service.
While you can run the process manually for small apps, it is not even conceivable when your application consists of many instances (many ranging from 3 to hundreds, even thousands).
Therefore the zero-downtime (rolling) process is almost always associated with automation.
The repetitive nature of automation makes it reliable: regardless of the trigger (a manual red button or each SCM change) and of the frequency (once a year or several times a day), it should lead to the same result.
But it's not all rosy (it never is in software), because load-balanced applications come with their own set of design constraints related to concurrency.
In this post we'll assume that all those design constraints are satisfied.
Here I share one implementation of zero-downtime deployment among many others. It is not rocket science in itself, but unexpected issues always pop up.

The goal

We want to deploy a new version of an application while the previous one is still running.
The application is written with Spring-boot, load balanced with fair, and runs on DigitalOcean cloud instances. The process is orchestrated by Ansible.


Spring-boot

I must admit I've been lazy: I could have taken the opportunity to learn a new language/framework, as Rails or Django would have been equally suitable for this process, but I feel at ease with Java and Spring-boot. The framework provides many operations-ready features, which is invaluable when you consider the time usually spent designing/developing/configuring operations-ready platforms.


Digitalocean

Digitalocean is a cloud provider of Linux boxes. I chose a cloud provider because I wanted the process to be self-contained and isolated (and I don't have that kind of power at home). I did not go for Docker precisely because it hides all the aspects related to SSH and authentication. That is indeed part of the philosophy behind Docker, but it would make my implementation less representative of the topology we deal with on a day-to-day basis, and I did not like the idea. In addition, Docker does not play nicely with Ansible (it's doable but not natural) when it comes to orchestrating many instances that share a common config: Docker has its own tool for that (docker-compose) and I did not want to give up on Ansible.
Last but not least: DigitalOcean is supported by Ansible, my orchestration tool.


Ansible

Ansible is the tool that makes it all possible and orchestrates the process: it creates nodes on DigitalOcean, installs software and runtimes, starts and stops services, etc. Don't ask me why I chose Ansible: I just love the tool, so I'm completely opinionated. Discovering Ansible had the same impact on me as discovering Maven. It's a conventional and natural way of managing nodes by describing our intent rather than the low-level implementation details.


Fair

I initially started with HAProxy and realized that it is extremely sensitive to configuration issues; for instance, the load balancer refuses to start if no member is up. I think it was not designed for such dynamic activity on its configuration, even with the new ability to reload it without restarting.
Fair came in handy because it decouples balancing into 2 roles: the carousel (the equivalent of a frontend in HAProxy) and the transponder (the equivalent of a backend in HAProxy). The carousel listens for incoming requests and transfers them to the transponders. The very nice feature is that transponders register/unregister themselves with the carousel without impacting any other transponder; it is not the carousel that includes/excludes members. I find it much more suitable for scripting purposes.

Enough theory now the practice.

Our goal is to run something similar to

ansible-playbook -i /etc/ansible/hosts -e "app_rev1=8e4accb app_rev2=54783ce" roll-update.yml

I guess the intent could not be clearer: ansible please upgrade my application from revision 8e4accb to revision 54783ce.
To do so ansible will execute the logic described in roll-update.yml:

  • spin up nodes
  • install + configure software and runtimes
  • deploy app nodes at revision 1
  • test revision 1
  • deploy app nodes at revision 2
  • test revision 2

Let’s review the various steps. I only picked relevant snippets of the main playbook but if you are interested in the full playbook you can take a look at this repository:

Create Nodes
# Sequence:
# - create instances
# - register instances in groups
# - register new instances in the known hosts files for flawless authentication

- hosts: "localhost"
  connection: "local"
  vars:
    ssh_known_hosts_file: "~/.ssh/known_hosts"

  tasks:
  - name: droplets | create
    digital_ocean: state="active" command="droplet" name="{{ item }}" size_id="62" region_id="2" image_id="13089493" ssh_key_ids="625455" wait_timeout="500" unique_name="yes"
    with_items:
      - "rolling-update-app-1"
      - "rolling-update-app-2"
      - "rolling-update-app-3"
      - "lb"
    register: droplets

  - name: droplets | register app IP
    add_host: name="{{ item.droplet.ip_address }}" groups="rolling-update-app"
    when: "'rolling-update-app' in item.droplet.name"
    with_items: droplets.results

  - name: droplets | register lb IP
    add_host: name="{{ item.droplet.ip_address }}" groups="lb"
    when: "'lb' in item.droplet.name"
    with_items: droplets.results

  - name: make sure the known hosts file exists
    file: path="{{ ssh_known_hosts_file }}" state="touch"

  - name: droplets | remove from known hosts
    shell: "ssh-keygen -R {{ item.droplet.ip_address }}"
    with_items: droplets.results

  - name: droplets | add to known hosts
    shell: "ssh-keyscan -H -T 10 {{ item.droplet.ip_address }} >> {{ ssh_known_hosts_file }}"
    with_items: droplets.results

The digital_ocean module instructs Ansible to connect to the DigitalOcean API and create an Ubuntu 14.04 instance (image 13089493) of size 2 GB (62) in the Amsterdam data center (region 2). Authentication uses an SSH key.
The process waits until the instance is active, or fails with a timeout after 500 s.
The instruction repeats for each item listed in the with_items section.
Once done, we need to register our instances for further reuse in the process (to destroy them, for example). That's the purpose of the register keyword: it saves the outcome of the digital_ocean execution. The resulting structure contains a lot of useful information, including the IP address of each node.
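For illustration, the registered structure looks roughly like this; the droplet/ip_address field names match what the playbook dereferences later, but the values are made up:

```yaml
# Shape of the registered variable (illustrative values only)
droplets:
  results:
    - droplet:
        id: 4567890                     # used later to delete the instance
        name: "rolling-update-app-1"    # used to sort instances into groups
        ip_address: "198.51.100.11"     # used to address the node over SSH
      changed: true
```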

From here we have 2 options for the rest of the process:

  • reference the registered structure and, each time we need an instance's properties, iterate through it => NOT recommended
  • use a much more powerful Ansible feature: register the freshly created instances, via the add_host module, in the groups they belong to. This approach is favored because, as we'll see later in this playbook, applying instructions to all hosts of a group is then just a matter of naming that group, and individual hosts remain reachable through the hostvars magic variable available out of the box
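Since add_host registers each droplet under its IP address, any later play can recover, say, the load balancer address purely from group membership. A sketch; the file path and property name are made up:

```yaml
# The lb group member was registered by IP, so its inventory name IS its address
- name: point the app at the load balancer (illustrative task)
  lineinfile: dest="/opt/app/application.properties"
              line="lb.address={{ groups['lb'][0] }}"
```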

About idempotency I faced 2 issues:

  • The first time I re-ran the digital_ocean module I ended up with duplicate instances, with different IPs, in my DigitalOcean dashboard. That is the default behavior; setting the unique_name property to yes fixed the issue.
  • The second idempotency issue you might face is fingerprint handling. When you drop an instance, its IP address remains available to your account for a while; it's a DigitalOcean feature. While the IP address stays the same, the host fingerprint does not, so SSH complains about seeing the same IP with a different fingerprint, which fails Ansible authentication via SSH. You could disable strict host key checking in Ansible, but this is not recommended. I chose instead to delete any stale entry from my known_hosts file (ssh-keygen -R) and add the fresh one (ssh-keyscan -H -T 10)

Finally my instance creation process was idempotent.

As you may have guessed, the digital_ocean module only works if you have a DigitalOcean account. The account provides a client ID/API key pair if you use API v1, or an access token if you use API v2. DigitalOcean supports both (for now), but at the time I started investigating the Ansible module, only API v1 was supported.
The difference between the two APIs is that v2 is more REST oriented and more convenient: it lets you specify instances by their human-readable properties instead of ids, which is far more robust since the ids change often. For instance, my build regularly fails because the image_id currently associated with Ubuntu 14.04 x64 will change the next time DigitalOcean decides to support other image types.

Delete nodes
# Sequence:
# - drop instances

- hosts: "localhost"
  connection: "local"

  tasks:
  - name: droplets | delete
    digital_ocean: state="deleted" command="droplet" id="{{ item.droplet.id }}" ssh_key_ids="625455" wait_timeout="500"
    with_items: droplets.results

We can notice that the same module is used to create and drop instances; the only parameter that really changes is state: active for creation, deleted for deletion.
Create then drop is the basis for our self contained playbook, we can safely proceed.
Next step is to secure our nodes.

Secure nodes
# Sequence:
# - register sudo user to operate with the instances

- hosts: "rolling-update-app:lb"
  remote_user: "root"
  roles:
    - {role: "security"}

With these instructions, here is what we just asked Ansible to do: please apply the security role to all hosts that are members of the rolling-update-app and lb groups. If you followed along, the members of the rolling-update-app group are the rolling-update-app-* instances and the member of the lb group is the lb instance, so the security role applies to all 4 nodes.
I could have used the "all" keyword, but then Ansible would have tried to apply the security role to localhost as well, because localhost is included by default in the inventory. Another way of including all hosts but localhost is the pattern "all:!localhost".
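For reference, here are the targeting options just discussed as they would appear in a play (illustration only):

```yaml
- hosts: "rolling-update-app:lb"   # union of the two groups (what we used)
# - hosts: "all"                   # would also target localhost
# - hosts: "all:!localhost"        # everything except localhost
```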
But what’s in the security role ?

- name: create nodemanager user
  user: name="{{ node_management_user }}" shell=/bin/bash

- name: add authorized key to nodemanager
  authorized_key: user="{{ node_management_user }}" key="{{ lookup('file', '~/.ssh/') }}"

- name: add nodemanager to sudoers
  action: lineinfile dest=/etc/sudoers regexp="{{ node_management_user }} ALL" line="{{ node_management_user }} ALL=(ALL:ALL) NOPASSWD:ALL" state=present

I will not go through the details of a role structure, but above are the main tasks executed by the security role. Basically I create a user, associate it with an SSH key that is copied to the node, and add the user to sudoers. Without SSH key authentication (as opposed to username/password), automation is simply impossible, so it's essential to have this step working perfectly before proceeding.
We’re all good, we can now proceed with installing and deploying applications

Build app
# Sequence:
# - clone to assess if deployment is required (check install revision against required one)

- hosts: "localhost"
  connection: "local"
  roles:
    - {role: "clone-app", rev: "{{ app_rev1 }}" }

The clone-app role mostly executes the following steps on the control machine (localhost):

  • Test whether the clone directory exists and delete it if needed, for idempotency reasons
  • Clone the application at the required revision thanks to the git module. In a real example you don't need to specify the revision: by default it takes the latest revision of the branch; if you specify a tag, it picks the tag; if you specify a branch name, it picks the latest revision of that branch; finally, you can specify an exact sha1
  • Build the application with maven hence the mvn clean package ….
  • Configure logging, application, tests (test config needs to be aware of the load balancer address and port) and unix services (need to be aware of spring-boot management port for clean stop).
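A minimal sketch of what such clone-app tasks could look like; the clone path and repository URL are assumptions, not the post's actual values:

```yaml
# clone-app tasks (sketch; path and repo URL are hypothetical)
- name: remove a previous clone for idempotency
  file: path="/tmp/app-clone" state="absent"

- name: clone the application at the required revision
  git: repo="https://github.com/example/app.git" dest="/tmp/app-clone" version="{{ rev }}"

- name: build the application with maven
  shell: "mvn clean package chdir=/tmp/app-clone"
```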

Now that the application is built and the various configuration files are properly interpolated, we can deploy to the nodes.

Deploy and start apps nodes. Register them as cluster member
# Sequence:
# - deploy app if required
# - start app
# - register app as lb node

- hosts: "rolling-update-app"
  remote_user: "{{ node_management_user }}"
  sudo: "yes"
  roles:
    - {role: "deploy-app", rev: "{{ app_rev1 }}"}
    - {role: "start-app", rev: "{{ app_rev1 }}"}
    - {role: "add-lb-node", rev: "{{ app_rev1 }}"}

In this section we mostly do the following on the apps nodes:

  • We deploy the Java application built just before by invoking the deploy-app role. Prior to any deployment task we need to install the Java runtime; that is the purpose of the java role referenced by the deploy-app role (in its meta/main.yml file, dependencies section: referencing dependencies there is the Ansible convention for roles). Then a unique path is created on the node, and the executable and its configs are copied to the host. One valuable feature of the copy directive is its idempotency based on the md5 sum. Unfortunately, in Java, even when you rebuild the same revision of an artifact the md5 sum changes, mostly because build information (like a timestamp) gets included in the artifact. It's a real shame because it breaks idempotency. Why does that matter? Because Java apps are usually fat and take time to transfer over the network, which slows my process considerably when the transfer could have been avoided. Be warned.
  • Once deployed, we apply the start-app role. That role first kills any running instance of the application; killing is needed because the recommended "service state='stopped'" failed from time to time. It is also responsible for switching (via a Unix symlink) the active revision to the one being installed. Finally the Unix scripts are copied to the host. Only then can we start the application as a service and wait for its port to be up.
  • Then we apply the add-lb-node role, which configures the node's balancing information: the load balancer it should register with as a member, and other config specifics.
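The start-app behaviour described above could be sketched as follows; the service name, paths and port are assumptions:

```yaml
# start-app tasks (sketch; names, paths and port are hypothetical)
- name: kill any running instance of the app
  shell: "pkill -f rolling-update-app.jar || true"

- name: switch the active revision symlink
  file: src="/opt/app/releases/{{ rev }}" dest="/opt/app/current" state="link"

- name: start the application as a service
  service: name="rolling-update-app" state="started"

- name: wait for the application port to be up
  wait_for: port=8080 timeout=60
```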

Install and start the load balancer
# Sequence:
# - install fair carrousel

- hosts: "lb"
  remote_user: "nodemanager"
  sudo: "yes"
  roles:
    - {role: "lb"}

This section installs the fair package, configures it and starts it on the load balancer node. The configuration involves two important pieces of information: the member port to forward to and the port the balancer listens on.

Test the whole cluster
# Sequence:
# - test cluster

- hosts: "localhost"
  connection: "local"
  roles:
    - {role: "test-app"}

The test was configured earlier to contact the lb host on the lb port. This role simply runs it on the manager node.
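A smoke test of that kind could be as simple as the following task; lb_host, lb_port and the /health endpoint are assumptions (Spring-boot exposes /health through its actuator, but the post's actual test may differ):

```yaml
# test-app task (sketch; variables and endpoint are hypothetical)
- name: check that the cluster answers through the load balancer
  uri: url="http://{{ lb_host }}:{{ lb_port }}/health" status_code=200
```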

Roll update to revision 2
# Sequence:
# - clone to assess need deployment (check install revision against required one)
- hosts: "localhost"
  connection: "local"
  roles:
    - {role: "clone-app", rev: "{{ app_rev2 }}"}

# Sequence:
# - unregister app as lb node
# - stop app
# - deploy new app revision
# - start app
# - register app as lb node

- hosts: "rolling-update-app"
  remote_user: "{{ node_management_user }}"
  sudo: "yes"
  roles:
    - {role: "remove-lb-node"}
    - {role: "deploy-app", rev: "{{ app_rev2 }}"}
    - {role: "start-app", rev: "{{ app_rev2 }}"}
    - {role: "add-lb-node", rev: "{{ app_rev2 }}"}

# Sequence:
# - test cluster
- hosts: "localhost"
  connection: "local"
  roles:
    - {role: "test-app"}

You might notice that these steps are the same as the build/deploy/start/test steps above. It is indeed the same process, except for one step: before installing the new application revision we need to remove the node from the load balancer pool, which is the job of the remove-lb-node role. I really loved the fact that I was almost done once my cluster was created, because the very same roles are reused for the rolling update, only with a different revision of my application. You can see how roles promote modularity in Ansible.
You might also notice that here and there I left some sequences about "checking the revision on the target node". This is because I wanted true idempotency for my playbook: I wanted to avoid deploying and starting the application if it was already up and running at the required revision. I started down that path and it turned out to be more complicated than expected: I need to check the executable, the configs, the active revision and the running state before I can safely skip the steps. I find it a bit brittle, so I will think about a smarter way to do it.
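One possible shape for such a revision guard, assuming a hypothetical REVISION marker file written at deploy time:

```yaml
# revision guard (sketch; the marker file is an assumption)
- name: read the currently deployed revision, if any
  shell: "cat /opt/app/current/REVISION"
  register: deployed_rev
  ignore_errors: yes
  changed_when: false

- name: deploy the application
  # ...actual deployment tasks, skipped when the revision is already active
  shell: "echo deploying {{ rev }}"
  when: deployed_rev|failed or deployed_rev.stdout != rev
```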

To conclude

I really appreciated implementing this process because it was really challenging and I really learned a few things:

  • How to make better use of local connections, local_action and delegate_to
  • How to target multiple groups in ansible
  • How to make good use of role composition via dependencies
  • How to improve role reusability and readability by explicitly exposing the role's parameters
  • How to turn a java process into a java unix service
  • How to configure fair components for load balancing
  • How to configure the digitalocean module for ansible

I initially implemented a much more complex process that involved a database dump from the "prod" to the "test" env, a database migration and an elasticsearch migration just before migrating the instances, but I found it too long and not relevant for the purpose of this post, as each of those migration steps deserves a dedicated post (maybe some day, when I feel courageous enough to share it).
Also this implementation uses IP addresses instead of domain names: resolving hosts is not the responsibility of this playbook.
To be fully dynamic, a system should implement the following features:

  • Abstract the IP address behind host aliases, so that the host alias never changes across environments
  • Dynamically register/unregister new instances of a service via any Service discovery/registry technique. Consul would be my tool of choice but I need to be sure that it covers my use cases

I hope this post was helpful, especially if you go for Ansible and Spring-boot. To me, a modern project template should include the orchestration tool and infrastructure code in the testing process, because it makes shipping to production a breeze. Instead of a local vision of the product we build, we get a broader vision of the product we ship and of the various steps from commit to production. This yields a new definition of done: your artifact is done when it can be smoothly promoted throughout the environment pipeline.
Even without going for the immutable server strategy, which comes close to a 100% defect-free guarantee, this approach brings quite a good level of confidence for continuously shipping reliable products to production.


Resources

Ansible documentation
Ansible hyper reactive community
Spring-boot as a service
Dynamically remove fingerprints from known hosts
SSH Keys setup on Digital Ocean
Initial server setup

