Ansible Automation for Ubuntu Base setup

Ansible is a configuration management tool that automates server provisioning through standardized, repeatable procedures, reducing human error. It is agentless: no special software needs to be installed on the managed nodes. Ansible ships with everything necessary to write, organize, and run automation scripts.

Prerequisites

  1. An Ansible Control Node
  2. Ansible Hosts

Installing Ansible

We need to set up a single control machine from which we’ll execute our commands. See the official Ansible installation docs for other platforms.

sudo apt update
sudo apt install software-properties-common
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt install ansible

[!NOTE] On older Ubuntu releases, software-properties-common is called python-software-properties, and you may need apt-get rather than apt. Also, only newer releases (18.04, 18.10, and later) support the -u or --update flag on add-apt-repository. Adjust the commands as needed.

Inventory Setup

Ansible uses a simple inventory system to manage all hosts. This allows you to organize hosts into logical groups and negates the need to remember individual IP addresses or domain names. Want to run a command only on your staging servers? No problem. Pass the group name to the CLI command and Ansible will handle the rest.

The default location for the inventory file is /etc/ansible/hosts. However, we’re going to configure Ansible to use a different inventory file. Create a new plain-text file called inventory in /srv/ansible.local:

mkdir /srv/ansible.local/
cd /srv/ansible.local/
vi inventory

with the following contents:

[production]
10.200.2.100
10.200.2.50

The first line indicates the group name. The following lines are the servers. Multiple groups can be created using the [group name] syntax and hosts can belong to multiple groups.
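For example, a hypothetical staging group could sit alongside production, and a host can appear in more than one group (the staging addresses below are made up for illustration):

```
[production]
10.200.2.100
10.200.2.50

[staging]
10.200.3.10

[webservers]
10.200.2.100
10.200.3.10
```

With this layout, `ansible webservers -m ping` would target one host from each environment.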

You can also set connection variables for a whole group in the inventory file:

[production:vars]
ansible_ssh_user=ubuntu
ansible_ssh_private_key_file=~/.ssh/id_rsa

Now we need to configure Ansible to tell it where our hosts file is located. Create a new file called ansible.cfg with the following contents.

[defaults]
inventory = inventory
host_key_checking = False

Let’s take a look at the ping module, which ensures we can connect to our hosts by returning a “pong” response if successful:

ansible production -m ping

Assuming everything is set up correctly, you should receive a success response from each of the two hosts.

root@gowdham:/srv/ansible.local$ ansible production -m ping
10.200.2.100 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
10.200.2.50 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
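The ping module is just one of many modules you can run ad hoc against a group. For instance (both modules are standard Ansible builtins; the commands assume the inventory above):

```
# Check uptime on all production hosts
ansible production -m shell -a 'uptime'

# Install a package ad hoc, escalating privileges
ansible production -m apt -a 'name=htop state=present' --become
```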

Ansible Playbooks

Playbooks allow you to chain tasks together, essentially creating a blueprint or set of procedural instructions. Ansible executes the playbook in order and ensures each task has reached its desired state before moving on to the next. Because tasks are idempotent, re-running a playbook is safe: tasks whose state is already correct report “ok” and make no changes, so only the work that hasn’t been completed previously is actually performed.
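As a minimal sketch of this idea (a hypothetical standalone playbook, not part of the repository below), a play that installs and starts Nginx might look like:

```
- hosts: production
  become: true
  tasks:
    - name: install nginx
      apt:
        name: nginx
        state: present

    - name: ensure nginx is running
      service:
        name: nginx
        state: started
```

Running this twice changes nothing the second time: both tasks report “ok”, which is the idempotence described above.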

Organization

Let’s take a look at how our playbook is organized. The complete code is available on GitHub.

├── ansible.cfg
├── inventory
├── playbook.yml
├── files
|   ├── etc
|   |   ├── apt/apt.conf.d
|   |   |   ├── 01proxy
|   |   |   └── 50unattended-upgrades
|   |   ├── docker
|   |   |   └── daemon.json
|   |   ├── monit/conf.d
|   |   |   ├── 00-base
|   |   |   ├── iptables
|   |   |   ├── ssh_logins
|   |   |   └── system
|   |   ├── sysctl.d
|   |   |   ├── 99-disable-ipv6.conf
|   |   |   └── 99-swap.conf
|   |   ├── systemd
|   |   |   ├── system
|   |   |   |   └── apt-daily.timer.d
|   |   |   |       └── override.conf
|   |   |   └── timesyncd.conf.d
|   |   |       └── override.conf
|   |   └── zabbix
|   |       ├── zabbix_agentd.conf.d
|   |       |   └── 00-base.conf
|   |       └── zabbix_agentd.psk
|   ├── root
|   └── usr/local/sbin
|       └── fw-off.sh
├── handlers
|   ├── monit.yml
|   ├── sysctl.yml
|   ├── systemctl.yml
|   └── zabbix-agent.yml
└── tasks
    ├── disable-ipv6.yml
    ├── docker.yml
    ├── iptables.yml
    ├── monit.yml
    ├── unattended-upgrades.yml
    └── zabbix-agent.yml

Let’s take a look at the playbook.yml file.

- hosts: production
  become: true

  # Variables
  vars_prompt:
    - name: "host"
      prompt: "Select host (baremetal/lxc/proxmox-vm)"
      default: "baremetal"
      private: false

    - name: "docker"
      prompt: "Is docker needed? (yes/no)"
      default: "no"
      private: false

  tasks:
    # Default tasks
    - include_tasks: tasks/disable-ipv6.yml
    - include_tasks: tasks/iptables.yml
    - include_tasks: tasks/monit.yml

    # Host specific tasks
    - name: Install unattended-upgrades
      when: host == 'lxc' or host == 'proxmox-vm'
      include_tasks: tasks/unattended-upgrades.yml

    - name: Install docker
      when: docker == 'yes'
      include_tasks: tasks/docker.yml

  handlers:
    - import_tasks: handlers/sysctl.yml
    - import_tasks: handlers/systemctl.yml
    - import_tasks: handlers/monit.yml

Handlers

Handlers contain logic that should be performed after a module has finished executing, and they work very similarly to notifications or events. For example, when the Nginx configurations have changed, run service nginx reload. It’s important to note that these events are only fired when the module state has changed. If the configuration file didn’t require any updates, Nginx will not be reloaded.
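Continuing the Nginx example, the task-to-handler wiring looks like this (a hypothetical sketch; the names and file paths are illustrative):

```
# Task: copy the config and notify the handler only when the file changes
- name: deploy nginx config
  copy:
    src: files/etc/nginx/nginx.conf
    dest: /etc/nginx/nginx.conf
  notify:
    - nginx reload

# Handler: runs at most once, at the end of the play, and only if notified
- name: nginx reload
  service:
    name: nginx
    state: reloaded
```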

Let’s take a look at the handler file:

- name: systemctl timesyncd restart
  systemd:
    name: systemd-timesyncd.service
    state: restarted
    daemon_reload: true

- name: systemctl apt_daily_timer restart
  systemd:
    name: apt-daily.timer
    state: restarted
    daemon_reload: true

Tasks

Tasks contain the actual instructions to be carried out by the playbook.

# Install docker
- name: install docker
  apt:
    name: docker.io
    state: latest

# Install jq (a command-line JSON processor, used below to parse the GitHub API response)
- name: install jq
  apt:
    name: jq
    state: latest

# Install docker-compose from the latest GitHub release
# (become: true already provides root, so sudo is unnecessary; creates: makes the task idempotent)
- name: install docker-compose
  shell: wget -q -O - https://api.github.com/repos/docker/compose/releases/latest | jq -r '.assets[] | select(.name=="docker-compose-linux-x86_64") | .browser_download_url' | wget -i /dev/stdin -O /usr/local/bin/docker-compose && chmod 755 /usr/local/bin/docker-compose && ln -s docker-compose /usr/local/bin/d
  args:
    creates: /usr/local/bin/docker-compose
  
# Set log limit
- name: set log limit
  copy:
    src: "files/etc/docker/daemon.json"
    dest: "/etc/docker/daemon.json"
    owner: root
    group: root
    mode: 0644
  notify:
    - docker restart
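The notify above refers to a docker restart handler that isn’t shown in this excerpt; a matching handler might look like this (a sketch, assuming it lives in a handlers/docker.yml file):

```
- name: docker restart
  service:
    name: docker
    state: restarted
```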

Running the Playbook

To run the playbook, execute the following command:

ansible-playbook playbook.yml
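A few commonly useful flags (all standard ansible-playbook options) when running against real hosts:

```
# Dry run: report what would change without changing anything
ansible-playbook playbook.yml --check

# Limit the run to a single host from the inventory
ansible-playbook playbook.yml --limit 10.200.2.100

# Prompt for the become (sudo) password if needed
ansible-playbook playbook.yml --ask-become-pass
```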

References

  1. How to Install Ansible and Automate Your Ubuntu 22.04 Server Setup, SpinupWP
  2. How to Use Ansible to Automate Initial Server Setup on Ubuntu 20.04, DigitalOcean

This post is licensed under CC BY 4.0 by the author.