
Let's write some playbooks
To get a taste of Ansible, I'll try to write some playbooks for common security tasks: pushing SSH public keys, applying security updates, disabling root login, configuring the firewall, and changing the default SSH port.
Let's go!
Our playbook can be a single file like this:
- name: General Linux security hardening
  hosts: all
  become: true
  gather_facts: true
  remote_user: userblahblah
  tasks:
    - name: Ensure apache is at the latest version
      ansible.builtin.yum:
        name: httpd
        state: latest

    - name: Write the apache config file
      ansible.builtin.template:
        src: /srv/httpd.j2
        dest: /etc/httpd.conf

    - name: Ensure postgresql is at the latest version
      ansible.builtin.yum:
        name: postgresql
        state: latest

    - name: Ensure that postgresql is started
      ansible.builtin.service:
        name: postgresql
        state: started
But without structuring your Ansible tasks into separate files, you will end up with long playbooks that are hard to maintain.
So you are better off using import_tasks in the tasks section:
- name: General Linux security hardening
  hosts: all
  become: true
  gather_facts: true
  remote_user: userblahblah
  tasks:
    - import_tasks: tasks/authorized_keys.yml
    - import_tasks: tasks/disable_root_login.yml
  handlers:
    - import_tasks: handler/restart_ssh.yml
The path tasks/** is relative to the playbook directory. The tasks directory is where you define your tasks, and they can be reused in other playbooks as well.
You may wonder what is the field handlers doing here. Sometimes you want a task to run only when a change is made on a machine. For example, you may want to restart a service if a task updates the configuration of that service, but not if the configuration is unchanged. Ansible uses handlers to address this use case. Handlers are tasks that only run when notified.
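Here is a minimal sketch of the notify/handler pairing (the nginx task and handler names are hypothetical, just for illustration):

```yaml
# Hypothetical task: notifies the handler only when the file actually changes.
- name: Write the nginx config file
  ansible.builtin.template:
    src: nginx.conf.j2
    dest: /etc/nginx/nginx.conf
  notify:
    - restart nginx

handlers:
  # Runs at most once, at the end of the play, and only if notified.
  - name: restart nginx
    ansible.builtin.service:
      name: nginx
      state: restarted
```

If the template renders identically to the file already on the host, the task reports "ok" instead of "changed" and the handler never fires.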
With this setup in place, we are going to define our tasks in the next sections.
Before disabling password login or making any similar change, you need to put your public key on the remote servers so you can log in using SSH key authentication. Otherwise you will be locked out, and something bad happens.
Because this is so common, it has its own dedicated module named ansible.posix.authorized_key. Let's create the file tasks/authorized_keys.yml:
---
- name: Copy local public key to destination server
  ansible.posix.authorized_key:
    user: tommy
    state: present
    key: "{{ lookup('file', lookup('env', 'HOME') + '/.ssh/id_ed25519.pub') }}"
We use the module ansible.posix.authorized_key to copy our public key to the remote server. The fields are quite self-explanatory. In the key field, you need to put your public key string, but instead of hard-coding it you usually use the lookup plugin to read the value from an external file. In the lookup call, we locate our control host's home directory via the $HOME environment variable, then concatenate it with /.ssh/id_ed25519.pub.
Add this task to your main playbook and run it with the ansible-playbook command to make sure it's doing its job correctly. (Normally, if the key is already present on the remote server, nothing should happen.)
A task that automatically applies security patches through apt would be really helpful and time-saving.
Writing Ansible tasks for this purpose is fine, but there is another option for applying security patches: the apt package named unattended-upgrades. It fits the apt package management system better, and you don't need to do anything manually.
Create the file tasks/apt_security.yml :
- name: Update all packages to their latest version
  ansible.builtin.apt:
    name: "*"
    state: latest
    update_cache: true
    only_upgrade: true
We don't need to be too specific here: we pass the star as the package name and all packages will be upgraded to their latest patch versions. The update_cache field makes sure the package cache is refreshed before the upgrade, and only_upgrade ensures already-installed packages are upgraded without pulling in new ones.
Needless to say, we are using Ubuntu LTS, where package major versions stay frozen for the entire support time-span (five years).
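If you go the unattended-upgrades route instead, a minimal sketch might look like this (the package name and config path are the Debian/Ubuntu defaults, but verify them for your release):

```yaml
- name: Install unattended-upgrades
  ansible.builtin.apt:
    name: unattended-upgrades
    state: present
    update_cache: true

# Enable periodic runs; this mirrors what `dpkg-reconfigure unattended-upgrades`
# writes when you answer "yes".
- name: Enable periodic unattended upgrades
  ansible.builtin.copy:
    dest: /etc/apt/apt.conf.d/20auto-upgrades
    content: |
      APT::Periodic::Update-Package-Lists "1";
      APT::Periodic::Unattended-Upgrade "1";
```

With this in place, security updates are applied on the daemon's schedule and the ad-hoc upgrade task above becomes optional.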
Servers that are reachable from the public internet need to be hardened against bad actors such as hackers and automated bots.
Check your auth.log file for any suspicious activity.
You have some common options to configure:
- root login
- fail2ban

All approaches to human authentication rely on something you know, something you have, or something you are. The thing is, knowing something good enough, like a complex password, is not feasible for most of us, especially when you are dealing with multiple machines. So you'd better leverage something you can have, such as SSH keys. Even better, combine them with an optional key passphrase that is easy to remember.
Here we just cover the first one. It's probably the easiest and most straightforward thing to do.
The built-in module ansible.builtin.lineinfile can help us here:
---
- name: Disable root login for the ssh daemon
  ansible.builtin.lineinfile:
    path: /etc/ssh/sshd_config
    state: present
    regex: '^#?PermitRootLogin\s+.*'
    line: 'PermitRootLogin no'
  notify:
    - restart ssh service
The lineinfile module needs the path of the file, a regex to match, and the line to write in place of the match.
When modifying a line, the regex should typically match both the initial state of the line as well as its state after replacement by line to ensure idempotence.
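You can sanity-check such a regex locally before running the playbook; here grep -E stands in for lineinfile's matching, `[[:space:]]` is the POSIX spelling of `\s`, and the sample lines are hypothetical sshd_config states:

```shell
# The pattern matches the line both before and after our edit, so re-running
# the task finds the same line instead of appending a duplicate.
pattern='^#?PermitRootLogin[[:space:]]+.*'

echo '#PermitRootLogin prohibit-password' | grep -E "$pattern"   # initial, commented state
echo 'PermitRootLogin no'                 | grep -E "$pattern"   # after our edit
echo 'PermitRootLogin yes'                | grep -E "$pattern"   # manually changed state
```

All three invocations print their input line, confirming the task stays idempotent across those states.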
At the end, we notify a handler to apply the new configuration. Here is the handler, defined in handler/restart_ssh.yml:
---
- name: restart ssh service
  ansible.builtin.service:
    name: ssh.socket
    state: restarted
Don't rush anything with iptables, otherwise you will lock yourself out of your own server. Test your rules with tools like iptables-apply to see whether they work as expected.
You'd better be strict about which packets are free to enter your server.
Common sense about firewall strategy says you should allow only the packets you need and trust. You can't have serious security with a default policy of ACCEPT, continuously inserting rules to drop packets from sources that start sending bad stuff. You must allow only the packets you trust, and deny everything else.
Beyond that, firewall rules get a little more specific, because they depend on the applications you are running and hosting on your server.
Here I write some general ones in the playbook.
iptables is still the dominant firewall on most Linux distros, so we will focus on it here.
---
- name: Flush all INPUT chain rules
  ansible.builtin.iptables:
    chain: INPUT
    flush: yes

- name: Accept icmp packets
  ansible.builtin.iptables:
    chain: INPUT
    protocol: icmp
    jump: ACCEPT

- name: Accept tcp packets that are initiated from the server
  ansible.builtin.iptables:
    chain: INPUT
    protocol: tcp
    syn: negate
    jump: ACCEPT

- name: Accept packets from localhost
  ansible.builtin.iptables:
    chain: INPUT
    source: 127.0.0.1
    jump: ACCEPT

- name: Accept packets from local network
  ansible.builtin.iptables:
    chain: INPUT
    source: 10.0.0.0/24
    jump: ACCEPT

- name: Accept tcp packets on port 22
  ansible.builtin.iptables:
    chain: INPUT
    protocol: tcp
    destination_ports:
      - "22"
    jump: ACCEPT

- name: Accept dns packets
  ansible.builtin.iptables:
    chain: INPUT
    protocol: udp
    source_port: 53
    jump: ACCEPT

- name: Accept on http port
  ansible.builtin.iptables:
    chain: INPUT
    protocol: tcp
    destination_port: 80
    jump: ACCEPT

- name: Accept requests to k3s control node, only from local network
  ansible.builtin.iptables:
    chain: INPUT
    source: 10.0.0.0/24
    protocol: tcp
    destination_port: 6443
    jump: ACCEPT

- name: INPUT chain default policy DROP
  ansible.builtin.iptables:
    chain: INPUT
    policy: DROP

- name: Save current state of the firewall in file system
  community.general.iptables_state:
    state: saved
    path: /etc/iptables/rules.v4
Things are quite self-explanatory. The important part is making sure the iptables rules are saved and preserved across reboots. This can be done via the package iptables-persistent: you just install it, and from then on the file /etc/iptables/rules.v4 is loaded at every boot.
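Installing it can itself be an Ansible task; a minimal sketch (iptables-persistent is the package name on Debian/Ubuntu):

```yaml
- name: Install iptables-persistent so saved rules load at boot
  ansible.builtin.apt:
    name: iptables-persistent
    state: present
    update_cache: true
```

Run this before the iptables_state save task, so the /etc/iptables directory exists and the boot-time loader is in place.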
UFW can also be used as a more ergonomic front-end to iptables here. Its rules also need no extra configuration to persist after a reboot. It just works!
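If you take the UFW route, a sketch of the same default-deny idea using the community.general.ufw module might look like this (it mirrors only the SSH rule above; the collection must be installed, and you should extend the allow list for your own services):

```yaml
# Allow SSH first, so enabling the deny policy doesn't cut us off.
- name: Allow incoming SSH
  community.general.ufw:
    rule: allow
    port: "22"
    proto: tcp

# Default-deny everything else and enable the firewall.
- name: Deny all other incoming traffic and enable UFW
  community.general.ufw:
    state: enabled
    policy: deny
    direction: incoming
```

Ordering matters here for the same reason as with raw iptables: the allow rule must exist before the default policy flips to deny.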
This is where Ansible gets complicated.
Changing the default SSH port is easy enough. You just use lineinfile and change the Port parameter of /etc/ssh/sshd_config.
The issue is that Ansible uses SSH for its connections. How can we change the default SSH port and still maintain Ansible's SSH connection?
The problem is about making the role idempotent without having to fiddle with your inventory file too much.
Consider the following inventory:
[homelab]
10.0.0.30 ansible_port=2222
10.0.0.31 ansible_port=2222
10.0.0.32 ansible_port=2222
[cloud]
188.65.100.54 ansible_port=22
Let's say we want to change the SSH port of the [cloud] hosts to 2222. Once the playbook has done its job, the Ansible SSH connection will be cut off and we would have to change our inventory file manually.
Is it even possible to make this idempotent? Or am I just too much of a perfectionist?
It turns out there is a way to handle it:
---
# When we change the ssh port, ansible can not connect to the host anymore.
# So here we figure out what the current port number is and make changes based
# on that, without manual modification to the inventory.
# The variable "configured_port" is the port that needs to be configured!
# We make a copy of the variable "ansible_port", because changing that directly
# would mess with the ansible ssh connection.
- name: Set configured port fact
  ansible.builtin.set_fact:
    configured_port: "{{ ansible_port }}"

- name: Check if server is using the default SSH port
  ansible.builtin.wait_for:
    port: 22
    state: started
    host: "{{ inventory_hostname }}"
    timeout: 4
    delay: 0
    msg: The host is not reachable on the default port 22!
  delegate_to: localhost
  ignore_errors: true
  register: default_ssh

# If the check above succeeded, continue the tasks with the default port set.
- name: Set inventory ansible_port to default
  ansible.builtin.set_fact:
    ansible_port: "22"
  when: default_ssh is defined and
        default_ssh.failed is false

# Only runs if the check on port 22 failed.
- name: Check if we're using the inventory-provided SSH port
  ansible.builtin.wait_for:
    port: "{{ configured_port }}"
    state: started
    host: "{{ inventory_hostname }}"
    timeout: 4
    delay: 0
    msg: "The host is not reachable on the inventory specified port {{ ansible_port }}"
  delegate_to: localhost
  ignore_errors: true
  register: configured_ssh
  when: default_ssh is defined and
        default_ssh.state is undefined

# If {{ ansible_port }} is reachable, we don't need to do anything special.
- name: SSH port is configured properly
  ansible.builtin.debug:
    msg: "SSH port is configured properly"
  when: configured_ssh is defined and
        configured_ssh.state is defined and
        configured_ssh.state == "started"

# Note: a registered variable is set even when its task is skipped, so checking
# whether some registered variable "is undefined" would never fire. Instead we
# fail when both reachability checks above failed, i.e. the host answers on
# neither 22 nor "ansible_port".
- name: Fail if SSH port was not auto-detected (unknown)
  ansible.builtin.fail:
    msg: "The SSH port is neither 22 nor {{ ansible_port }}. Check {{ inventory_hostname }} manually!"
  when: default_ssh is failed and configured_ssh is failed

- name: Change SSH listen port to target port
  ansible.builtin.lineinfile:
    dest: /etc/ssh/sshd_config
    regex: "^#?Port.*"
    line: "Port {{ configured_port }}"
  notify:
    - restart ssh service

# You probably need this; otherwise the ssh restart handlers are not executed
# until the end of the play.
- name: Flush handlers to apply SSH changes
  ansible.builtin.meta: flush_handlers

- name: Ensure we use the configured SSH port for the remainder of the play
  ansible.builtin.set_fact:
    ansible_port: "{{ configured_port }}"

# Fact gathering is better disabled for this playbook, because it needs an ssh
# connection to the remote server. Gather facts manually now instead.
- name: Gather facts now that the server is reachable on the configured port
  ansible.builtin.setup:
First, set the variable ansible_port in your inventory to the target port you want. For example: 1822.
I've written a lot of comments in the playbook so you can make sense of what is happening.
Here are some tips:
- We use ansible.builtin.wait_for to check whether the hosts are reachable on a given port. If a host is not reachable, the state field of the wait_for result stays undefined. We use this as the primary way to detect which port is live and act on it systematically.
- The delegate_to field in the wait_for tasks is necessary: we want to run these checks from our localhost, not from the remote host!
- Do some research on how your Linux distro manages the ssh daemon and socket, so you can be sure the changes are applied correctly. Things like systemd are messing with everything nowadays...
As you have seen, Ansible holds a really important place in the DevOps tool era. It's performant and ergonomic for writing common tasks. On the downside, it's tedious to write and test properly, and playbooks can become extremely tricky for custom, uncommon tasks.