Developing Ansible Playbooks for Arch Linux with Vagrant

I’m a big fan of automated configuration management software, and an even bigger fan of utilizing Vagrant for developing configuration modules/cookbooks/states/playbooks/whatever in a fast and easily reproducible environment. I previously created Puppet Sandbox for just this purpose, but have more recently taken an interest in using Ansible for configuration and orchestration.

I also have a long history of working with Arch Linux, and wanted to develop Ansible playbooks specifically for managing Arch machines. Vagrant supports automatically provisioning machines via Ansible out of the box, but there were still a couple of hurdles to get over:

  1. Up-to-date Vagrant base boxes for Arch are hard to find.
  2. Arch Linux doesn’t have Python 2 installed by default, which is a dependency for Ansible.

Packer Arch

To solve the first problem, I decided to create a generic Arch Linux base box myself. In the not too distant past, the way to do that in a repeatable fashion was Veewee, but the project has gotten progressively more complicated to set up and use. Lucky for me, there’s a new kid on the block for creating machine images named Packer, built and maintained by the author of Vagrant, Mitchell Hashimoto.

To make a long story short, I wrote Packer Arch, which is a bare bones Packer template and installation script that can be used to quickly generate Vagrant base boxes for Arch Linux. My goal with the box was to be as minimal as possible, and to roughly duplicate what you’d get when purchasing an Arch Linux VPS from a provider like DigitalOcean. Starting from that point, I wanted to configure everything else via Ansible.

Bootstrapping the Virtual Machine

Solving the Python 2 problem was a little trickier. Ansible itself offers a possible solution in its raw module, but Vagrant’s provisioning integration with Ansible requires Python 2 to be on the base box before you can run any playbooks. It’s the classic “chicken or the egg” problem.

Since utilizing Ansible for configuration as well as orchestration tasks was desirable, but would require having a proper setup outside of Vagrant anyway, I just decided to ignore Vagrant’s provisioner altogether. Instead, I wrote a short script to handle the one-time tasks so I could interact with the VM using Ansible in the exact same fashion as I would any other server.


The prerequisites for running the bootstrap script include assigning a known IP address to the machine via the Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "arch"
  config.vm.network :private_network, ip: ""  # fill in a known private IP
end

…recording that same IP in an inventory file named hosts for Ansible to reference:
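A minimal sketch of that hosts file, assuming the vagrant group name used by the ansible commands later on, with the IP left as a placeholder to match your Vagrantfile:

```ini
[vagrant]
<vm-ip>
```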


…and finally pointing Ansible to the correct Python binary by creating a group_vars/all file containing:

# Variables listed here are applicable to all host groups.

ansible_python_interpreter: /usr/bin/python2
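For orientation, the files mentioned so far fit together in a layout along these lines (bootstrap.sh is an assumed name for the script shown in the next section):

```
.
├── Vagrantfile
├── bootstrap.sh        # the bootstrap script (name assumed)
├── hosts               # Ansible inventory with the VM's IP
├── group_vars/
│   └── all             # shared variables, e.g. ansible_python_interpreter
└── site.yml            # master playbook
```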

The Script

On top of installing Python 2, I have the bootstrap script handle a few other items for convenience:

  1. Create my user account and grant it full sudo privileges.
  2. Add my SSH public keys to the newly created account.
  3. Download a current package mirrorlist based on my geography.

The user management steps are handled by running the tasks tagged “bootstrap” from my regular master playbook, and the mirrorlist is downloaded and then transferred to the machine via the copy module.
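The bootstrap-tagged tasks themselves aren’t shown here, but they might look roughly like this sketch; the user name, module arguments, and key path are illustrative assumptions, not the actual playbook contents:

```yaml
# Hypothetical tasks tagged "bootstrap"; names and paths are illustrative
- name: create user account in the wheel group
  user: name=aaron groups=wheel shell=/bin/bash
  tags: bootstrap

- name: grant the wheel group full sudo privileges
  lineinfile: "dest=/etc/sudoers state=present regexp='^%wheel' line='%wheel ALL=(ALL) ALL' validate='visudo -cf %s'"
  tags: bootstrap

- name: add SSH public key to the new account
  authorized_key: user=aaron key="{{ lookup('file', 'files/id_rsa.pub') }}"
  tags: bootstrap
```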

Without further ado, here’s the bootstrap script:

#!/usr/bin/env bash

export ANSIBLE_HOSTS="${PWD}/hosts"

vagrant up
ansible vagrant -m raw -a 'pacman -Sy --noconfirm python2' --user=vagrant --private-key="${HOME}/.vagrant.d/insecure_private_key" --sudo
ansible-playbook site.yml --tags=bootstrap --user=vagrant --private-key="${HOME}/.vagrant.d/insecure_private_key" --sudo


# Mirrorlist generator URL; the country and protocol here are examples, so
# adjust them for your own geography
URL='https://archlinux.org/mirrorlist/?country=US&protocol=https&use_mirror_status=on'

if /usr/bin/curl --silent --fail --output mirrorlist "${URL}"; then
    # The generated list ships with every Server line commented out; BSD sed
    # on OS X requires an explicit empty suffix for in-place edits, GNU sed does not
    case $OSTYPE in
        darwin*)
            /usr/bin/sed -i '' 's/#Server/Server/g' mirrorlist
            ;;
        *)
            /usr/bin/sed -i 's/#Server/Server/g' mirrorlist
            ;;
    esac
    ansible vagrant -m copy -a 'src=mirrorlist dest=/etc/pacman.d/mirrorlist owner=root group=root mode=0644 backup=yes' --user=vagrant --private-key="${HOME}/.vagrant.d/insecure_private_key" --sudo
    rm mirrorlist
fi

echo 'export ANSIBLE_HOST_KEY_CHECKING=False'
echo "export ANSIBLE_HOSTS=${PWD}/hosts"

Once the script runs, I paste the environment variable export lines that it echoes into my shell. Disabling host key checking makes Ansible purposefully ignore SSH host key verification; since the VM is transient, we don’t need to permanently store its key.
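Concretely, the pasted lines amount to the following; the hosts path resolves to wherever the script was run from:

```shell
# Mirror of what the bootstrap script echoes: ANSIBLE_HOSTS points Ansible at
# our inventory file, and disabling host key checking skips the known_hosts prompt
export ANSIBLE_HOST_KEY_CHECKING=False
export ANSIBLE_HOSTS="${PWD}/hosts"

# Confirm the variables are visible to child processes such as ansible
env | grep '^ANSIBLE_'
```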

Achievement Unlocked!

From there on out, all of the Ansible modules work as expected and no longer require connecting as the vagrant user:

$ ansible vagrant -m ping
<vm-ip> | success >> {
    "changed": false,
    "ping": "pong"
}
To give you an idea of the actual playbooks I’m using with this setup, take a look at my Monarch project, and in particular, the users.yml file under the common role. As always, let me know if you need any help putting all of the pieces together.

— Aaron Bull Schaefer