Ansible — Automate Your Server Infrastructure with Code

Stop SSH-ing into servers to run the same commands manually. This hands-on guide shows how to use Ansible to provision, configure, and deploy to Hetzner cloud servers — with real playbooks you can copy and run.

If you manage even two servers, you've felt the pain: SSH in, run apt update, install packages, edit config files, restart services, repeat on the next server. It works until it doesn't — you forget a step, configs drift between machines, and "it works on server A but not server B" becomes your new debugging hobby.

Ansible solves this. You describe what your servers should look like in YAML files, run one command, and Ansible makes it happen — on one server or fifty. No agents to install, no master server to maintain. Just SSH and Python (which your servers already have).

This post walks through Ansible from zero to a working deployment pipeline, using Hetzner Cloud servers as our target infrastructure.


What Ansible Actually Does

Ansible is a configuration management and automation tool. You write declarative YAML files (called playbooks) that describe the desired state of your servers. Ansible connects via SSH, checks what's already in place, and only changes what needs changing.

Concept   | What It Is                                    | Example
Inventory | List of servers to manage                     | Your Hetzner VPS IPs grouped by role
Playbook  | YAML file describing desired state            | "Install Docker, copy configs, start containers"
Task      | Single action within a playbook               | apt: name=nginx state=present
Role      | Reusable bundle of tasks + files + templates  | A "docker" role you apply to any server
Module    | Built-in function Ansible can execute         | apt, copy, docker_compose, ufw

The key insight: Ansible is idempotent. Running the same playbook twice produces the same result. If nginx is already installed, it skips that step. This means you can run your playbooks as often as you want without fear of breaking things.
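Concretely, a declarative task describes an end state rather than an action, so rerunning it is always safe. A minimal sketch (the file path and setting are illustrative):

```yaml
# Declares a state ("this exact line exists in this file"), not an action.
# First run: changed. Every run after that: ok — nothing is modified.
- name: Ensure swappiness is configured
  lineinfile:
    path: /etc/sysctl.conf
    line: "vm.swappiness=10"
    state: present
```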


Setting Up: Your Control Machine

Ansible runs from your local machine (or a CI server). It doesn't need anything installed on the target servers beyond SSH and Python.

Install Ansible

# Ubuntu/Debian
sudo apt update && sudo apt install -y ansible

# macOS
brew install ansible

# Via pip (any OS)
pip install ansible

Verify:

ansible --version
# ansible [core 2.16.x]

Project Structure

Create a directory for your infrastructure code:

infra/
├── inventory/
│   ├── hosts.yml          # Server list
│   └── group_vars/
│       └── all.yml        # Variables shared across all servers
├── playbooks/
│   ├── setup.yml          # Initial server setup
│   ├── deploy.yml         # Application deployment
│   └── security.yml       # Security hardening
├── roles/
│   ├── common/            # Base packages + config
│   ├── docker/            # Docker installation
│   └── app/               # Application-specific setup
└── ansible.cfg            # Ansible configuration

Step 1: Inventory — Define Your Servers

The inventory file tells Ansible which servers to manage and how to connect.

# inventory/hosts.yml
all:
  children:
    webservers:
      hosts:
        web01:
          ansible_host: 192.0.2.10
          ansible_user: root
        web02:
          ansible_host: 192.0.2.20
          ansible_user: root

    appservers:
      hosts:
        app01:
          ansible_host: 192.0.2.30
          ansible_user: root
        app02:
          ansible_host: 192.0.2.40
          ansible_user: root

Each server gets a friendly name. Groups (webservers, appservers) let you target subsets of your infrastructure.

Test connectivity:

ansible all -i inventory/hosts.yml -m ping
web01 | SUCCESS => { "ping": "pong" }
web02 | SUCCESS => { "ping": "pong" }
app01 | SUCCESS => { "ping": "pong" }
app02 | SUCCESS => { "ping": "pong" }

If this works, Ansible can reach your servers. Everything else builds on this.
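Connection details beyond host and user can also live in the inventory. A sketch assuming a non-default SSH key and port — the path and port here are placeholders, not values from this setup:

```yaml
# inventory/hosts.yml — per-host connection overrides (illustrative values)
web01:
  ansible_host: 192.0.2.10
  ansible_user: root
  ansible_port: 2222
  ansible_ssh_private_key_file: ~/.ssh/hetzner_ed25519
```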

Group Variables

Variables shared across servers go in group_vars/:

# inventory/group_vars/all.yml
timezone: Europe/Zurich
swap_size: 2G
docker_compose_version: "2.24.0"

# SSH hardening
ssh_port: 22
ssh_permit_root: "yes"  # Switch to "no" after creating deploy user

Step 2: Your First Playbook — Server Setup

A playbook is a list of tasks that run in order on your target servers. Let's start with the basics every server needs.

# playbooks/setup.yml
- name: Initial server setup
  hosts: all
  become: yes  # Run as root

  vars:
    packages:
      - curl
      - git
      - htop
      - ufw
      - fail2ban
      - unattended-upgrades

  tasks:
    - name: Update apt cache
      apt:
        update_cache: yes
        cache_valid_time: 3600

    - name: Upgrade all packages
      apt:
        upgrade: dist
        autoremove: yes

    - name: Install base packages
      apt:
        name: "{{ packages }}"
        state: present

    - name: Set timezone
      timezone:
        name: "{{ timezone }}"

    - name: Configure swap file
      block:
        - name: Check if swap exists
          stat:
            path: /swapfile
          register: swap_check

        - name: Create swap file
          command: "fallocate -l {{ swap_size }} /swapfile"
          when: not swap_check.stat.exists

        - name: Set swap permissions
          file:
            path: /swapfile
            mode: "0600"
          when: not swap_check.stat.exists

        - name: Make swap
          command: mkswap /swapfile
          when: not swap_check.stat.exists

        - name: Enable swap
          command: swapon /swapfile
          when: not swap_check.stat.exists

        - name: Add swap to fstab
          lineinfile:
            path: /etc/fstab
            line: "/swapfile none swap sw 0 0"
            state: present

Run it:

ansible-playbook -i inventory/hosts.yml playbooks/setup.yml

Ansible shows you exactly what it does:

TASK [Update apt cache] *****************************
ok: [web01]
ok: [web02]
changed: [app01]
changed: [app02]

TASK [Install base packages] ************************
ok: [web01]
changed: [web02]
ok: [app01]
changed: [app02]

  • ok = already in desired state, no changes made
  • changed = Ansible modified something to reach desired state
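The swap block above guards each `command` with a registered `stat` result. The `command` and `shell` modules also accept a `creates` argument that pushes the same check into the task itself — a more compact sketch of the same setup:

```yaml
# Skipped entirely once /swapfile exists, so it stays idempotent.
- name: Create and enable swap file
  shell: |
    fallocate -l {{ swap_size }} /swapfile
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile
  args:
    creates: /swapfile
```

The fstab entry from the original block is still needed so the swap survives reboots.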

Step 3: Security Hardening Playbook

Security should never be an afterthought. This playbook configures UFW firewall and fail2ban:

# playbooks/security.yml
- name: Security hardening
  hosts: all
  become: yes

  tasks:
    - name: Configure UFW defaults
      ufw:
        direction: "{{ item.direction }}"
        policy: "{{ item.policy }}"
      loop:
        - { direction: incoming, policy: deny }
        - { direction: outgoing, policy: allow }

    - name: Allow SSH
      ufw:
        rule: allow
        port: "{{ ssh_port }}"
        proto: tcp

    - name: Allow HTTP
      ufw:
        rule: allow
        port: "80"
        proto: tcp

    - name: Allow HTTPS
      ufw:
        rule: allow
        port: "443"
        proto: tcp

    - name: Enable UFW
      ufw:
        state: enabled

    - name: Configure fail2ban
      copy:
        dest: /etc/fail2ban/jail.local
        content: |
          [DEFAULT]
          bantime = 3600
          findtime = 600
          maxretry = 5

          [sshd]
          enabled = true
          port = {{ ssh_port }}
          filter = sshd
          logpath = /var/log/auth.log
          maxretry = 3
      notify: Restart fail2ban

  handlers:
    - name: Restart fail2ban
      service:
        name: fail2ban
        state: restarted

Notice the handler pattern: the copy task triggers a restart only if the config actually changed. If the fail2ban config is already correct, the service is never restarted.


Step 4: Roles — Reusable Building Blocks

When playbooks grow, you break them into roles. A role bundles tasks, templates, files, and variables into a reusable unit.

Creating a Docker Role

mkdir -p roles/docker/{tasks,handlers}

# roles/docker/tasks/main.yml
- name: Install Docker prerequisites
  apt:
    name:
      - apt-transport-https
      - ca-certificates
      - curl
      - gnupg
      - lsb-release
    state: present

- name: Add Docker GPG key
  # apt_key is deprecated on newer Ubuntu releases; a keyring file
  # referenced via signed-by in the repo line is the modern alternative.
  apt_key:
    url: https://download.docker.com/linux/ubuntu/gpg
    state: present

- name: Add Docker repository
  apt_repository:
    repo: "deb https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
    state: present

- name: Install Docker
  apt:
    name:
      - docker-ce
      - docker-ce-cli
      - containerd.io
      - docker-compose-plugin
    state: present
    update_cache: yes

- name: Start Docker service
  service:
    name: docker
    state: started
    enabled: yes

- name: Add user to docker group
  user:
    name: "{{ ansible_user }}"
    groups: docker
    append: yes

# roles/docker/handlers/main.yml
- name: Restart Docker
  service:
    name: docker
    state: restarted

Now use it in a playbook:

# playbooks/setup-docker.yml
- name: Install Docker on all servers
  hosts: all
  become: yes
  roles:
    - docker

One line applies the entire Docker setup to every server. Need Docker on a new server? Add it to the inventory and re-run.
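Roles can also ship overridable defaults in `defaults/main.yml` — the lowest-precedence variables, so inventory or playbook values always win. A sketch with a hypothetical variable the docker role's tasks could loop over instead of hardcoding `ansible_user`:

```yaml
# roles/docker/defaults/main.yml — illustrative default a caller can override
docker_group_users:
  - "{{ ansible_user }}"
```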


Step 5: Application Deployment Playbook

Here's where it gets practical. This playbook deploys a Docker Compose application — the pattern used across all our projects:

# playbooks/deploy.yml
- name: Deploy application
  hosts: "{{ target | default('all') }}"
  become: yes

  vars:
    app_dir: "/home/{{ app_name }}"
    repo_url: "{{ lookup('vars', app_name + '_repo') }}"

  vars_files:
    - "../inventory/group_vars/apps/{{ app_name }}.yml"

  tasks:
    - name: Clone or update repository
      git:
        repo: "{{ repo_url }}"
        dest: "{{ app_dir }}"
        version: "{{ branch | default('master') }}"
        force: yes
      register: git_result

    - name: Copy environment file
      template:
        src: "../templates/{{ app_name }}.env.j2"
        dest: "{{ app_dir }}/.env"
        mode: "0600"

    - name: Build and start containers
      community.docker.docker_compose_v2:
        project_src: "{{ app_dir }}"
        build: always
        state: present
      when: git_result.changed

    - name: Restart containers (if only config changed)
      community.docker.docker_compose_v2:
        project_src: "{{ app_dir }}"
        state: restarted
      when: not git_result.changed

    - name: Wait for application to be healthy
      uri:
        url: "http://localhost:{{ app_port }}/health"
        status_code: 200
      register: health_check
      retries: 10
      delay: 5
      until: health_check.status == 200

    - name: Print deployment status
      debug:
        msg: "{{ app_name }} deployed successfully on {{ inventory_hostname }}"

Deploy a specific app:

# Deploy the blog
ansible-playbook -i inventory/hosts.yml playbooks/deploy.yml \
  -e "app_name=blog target=blog"

# Deploy the blog to all hosts (target defaults to all)
ansible-playbook -i inventory/hosts.yml playbooks/deploy.yml \
  -e "app_name=blog"

Step 6: Templates — Dynamic Configuration

Ansible uses Jinja2 templates for dynamic files. Instead of manually editing .env files on each server, you template them:

# templates/blog.env.j2
POSTGRES_URL={{ db_host }}
POSTGRES_DB={{ db_name }}
POSTGRES_USER={{ db_user }}
POSTGRES_PASSWORD={{ db_password }}
REDIS_URL=redis://:{{ redis_password }}@{{ redis_host }}:6379/0
SECRET_KEY={{ secret_key }}
SERVER_ADDRESS=https://{{ domain }}

Variables come from your group_vars or host_vars:

# inventory/group_vars/apps/blog.yml
app_name: blog
app_port: 5000
domain: example.com
db_host: localhost
db_name: myapp_db
db_user: myapp_user

Secrets can be encrypted with Ansible Vault:

# Encrypt a file
ansible-vault encrypt inventory/group_vars/apps/blog.yml

# Run playbook with vault password
ansible-playbook -i inventory/hosts.yml playbooks/deploy.yml --ask-vault-pass

# Or use a password file
ansible-playbook -i inventory/hosts.yml playbooks/deploy.yml \
  --vault-password-file ~/.vault_pass
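Instead of encrypting a whole vars file, a single value can be vaulted inline with `ansible-vault encrypt_string 'secret' --name 'db_password'` and pasted into an otherwise plaintext file. A sketch of the resulting layout (the ciphertext is truncated and purely illustrative):

```yaml
# inventory/group_vars/apps/blog.yml — only the secret is encrypted
db_password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  62313365396662343061393464336163383764373764613633653634306231386433
  ...
```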

Step 7: Testing on Hetzner Cloud

Time to put this into practice. We'll provision a fresh Hetzner server and configure it entirely with Ansible.

Provision a Server via Hetzner CLI

# Install hcloud CLI
brew install hcloud  # macOS
# or: apt install hcloud-cli

# Set API token
hcloud context create myproject
# Enter your Hetzner API token

# Create a server
hcloud server create \
  --name ansible-test \
  --type cx22 \
  --image ubuntu-24.04 \
  --ssh-key my-key \
  --location fsn1

# Get the IP
hcloud server ip ansible-test
# → 49.13.xxx.xxx

Add to Inventory

# inventory/hosts.yml — add under a test group
    testservers:
      hosts:
        ansible-test:
          ansible_host: 49.13.xxx.xxx
          ansible_user: root

Run the Full Setup

# 1. Base setup
ansible-playbook -i inventory/hosts.yml playbooks/setup.yml \
  --limit ansible-test

# 2. Security hardening
ansible-playbook -i inventory/hosts.yml playbooks/security.yml \
  --limit ansible-test

# 3. Install Docker
ansible-playbook -i inventory/hosts.yml playbooks/setup-docker.yml \
  --limit ansible-test

# 4. Deploy an app
ansible-playbook -i inventory/hosts.yml playbooks/deploy.yml \
  --limit ansible-test \
  -e "app_name=blog"

The --limit flag targets only our test server. Once everything works, remove the limit and deploy to production.

Or Do It All in One Shot

Create a master playbook that imports the others:

# playbooks/full-setup.yml
- import_playbook: setup.yml
- import_playbook: security.yml
- import_playbook: setup-docker.yml

ansible-playbook -i inventory/hosts.yml playbooks/full-setup.yml \
  --limit ansible-test

One command. Clean server to fully configured in minutes.


Ansible vs. The Alternatives

Tool           | Approach                      | Pros                                                    | Cons
Ansible        | Agentless, push-based         | No setup on servers, simple YAML, huge module library   | Slower on large fleets (sequential SSH)
Terraform      | Infrastructure provisioning   | Best for creating cloud resources (VMs, networks, DNS)  | Not great for configuration management
Docker Compose | Container orchestration       | Simple, you're already using it                         | Doesn't manage the host OS, no multi-server
Puppet/Chef    | Agent-based, pull-based       | Powerful for large enterprises                          | Requires agent install, steeper learning curve

The sweet spot: Terraform to create the servers, Ansible to configure them, Docker Compose to run the apps. Each tool does what it's best at.


Practical Tips from Running Ansible in Production

1. Always use --check first (dry run):

ansible-playbook playbooks/deploy.yml --check --diff

Shows what would change without changing anything.

2. Tag your tasks for selective runs:

- name: Install nginx
  apt: name=nginx state=present
  tags: [nginx, webserver]

ansible-playbook playbooks/setup.yml --tags nginx

3. Use ansible.cfg to set defaults:

# ansible.cfg
[defaults]
inventory = inventory/hosts.yml
remote_user = root
host_key_checking = False
retry_files_enabled = False

[ssh_connection]
pipelining = True
ssh_args = -o ControlMaster=auto -o ControlPersist=60s

4. Debugging: When a task fails:

# Increase verbosity
ansible-playbook playbooks/setup.yml -vvv

# Run on a single host
ansible-playbook playbooks/setup.yml --limit web01
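Inside a playbook, the `register` + `debug` pair is the workhorse for inspecting what a task actually returned. A sketch (the service name is illustrative):

```yaml
- name: Capture service status for inspection
  command: systemctl is-active nginx
  register: nginx_status
  ignore_errors: yes  # keep going even when the service is down

- name: Show everything the task returned (stdout, rc, stderr)
  debug:
    var: nginx_status
```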

5. Ansible Galaxy — don't reinvent the wheel:

# Install community roles
ansible-galaxy install geerlingguy.docker
ansible-galaxy install geerlingguy.certbot
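Installed roles are referenced by name just like your own. A sketch of applying one — `docker_users` is that role's documented variable, but check the role's README for the full list:

```yaml
# playbooks/setup-docker-galaxy.yml — hypothetical alternative to our own role
- name: Install Docker via a Galaxy role
  hosts: all
  become: yes
  roles:
    - role: geerlingguy.docker
      vars:
        docker_users:
          - deploy  # hypothetical user to add to the docker group
```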

What We Covered

  • Inventory to define and group your servers
  • Playbooks to declare what your servers should look like
  • Roles to make your automation reusable
  • Templates for dynamic configuration with Jinja2
  • Vault for secret management
  • Hetzner Cloud as a practical testing ground

The code that configures your servers should be versioned, reviewed, and tested — just like application code. Ansible makes that possible with a minimal learning curve.

Next time you catch yourself running the same SSH commands on multiple servers, that's your signal: write a playbook instead.

