Ansible Basics

Here are some notes on Ansible basics to get started.

Here is the git repo with all the files that we work with:
https://github.com/likid0/AnsibleSeries.git

The inventory file.

A plain-text file where we list the hosts that are going to be managed by Ansible.

This is a static inventory that we populate with the required host info, for example:

[liquid@liquid-ibm:ansible/ansible]$ cat inventory                                                                                                                                       (10-03 22:48)
[web]
web1 ansible_ssh_host=10.0.1.5  ansible_ssh_user=vagrant ansible_ssh_pass=vagrant 
web2 ansible_ssh_host=10.0.2.5  ansible_python_interpreter=/usr/bin/python 
[lb]
lb1 ansible_ssh_host=10.0.3.5   ansible_ssh_user=vagrant ansible_ssh_pass=vagrant
[db]
db1 ansible_ssh_host=10.0.4.5   

[dc1-madrid:children]
web
lb
[dc2-madrid:children]
db


Looking at the first host line:

web1 ansible_ssh_host=10.0.1.5  ansible_ssh_user=vagrant ansible_ssh_pass=vagrant

We use ansible_ssh_host because we don't have any DNS resolution; we also specify the SSH user and password (a password in clear text, err, not the best idea..).

web2 ansible_ssh_host=10.0.2.5  ansible_python_interpreter=/usr/bin/python

Here we specify which Python binary to use; this is helpful on systems with the binary in non-standard paths (HP-UX, etc.) or on systems that have Python 3 installed as the default.

[dc1-madrid:children]
web
lb
[dc2-madrid:children]
db

Here we create a parent group that encompasses other groups; in this case the dc1-madrid group contains the web and lb groups.


Let's run an Ansible command and see if it works. We are going to use the ping module, which just checks that Ansible can run Python modules on the remote host.

[liquid@liquid-ibm:ansible/ansible]$ ansible web1 -i inventory -m ping                                                                                                                   (10-03 22:56)
web1 | FAILED! => {
    "failed": true, 
    "msg": "Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support this.  Please add this host's fingerprint to your known_hosts file to manage this host."
}

OK, we have no host key in known_hosts so it doesn't work; let's make an SSH connection by hand to accept the key (ssh-keyscan can also be used to append it to known_hosts):

[liquid@liquid-ibm:ansible/ansible]$ ssh vagrant@10.0.3.5                                                                                                                                (10-03 22:58)
The authenticity of host '10.0.3.5 (10.0.3.5)' can't be established.
ECDSA key fingerprint is SHA256:uaTeRUFySMmgcw3e0pxz74wRr6PjFNSYDJaKneWoeZ8.
ECDSA key fingerprint is MD5:8a:c5:02:40:3c:86:bf:3d:00:34:62:e8:1d:53:84:bf.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.3.5' (ECDSA) to the list of known hosts.

We try again and now it works:

[liquid@liquid-ibm:ansible/ansible]$ ansible lb1 -i inventory -m ping                                                                                                                    (10-03 22:59)
lb1 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}

We can disable the Host Key checking in the ansible config file:

[liquid@liquid-ibm:ansible/ansible]$ ls                                                                                                                                                  (10-03 22:59)
ansible.cfg  hosts  inventory
[liquid@liquid-ibm:ansible/ansible]$ cat ansible.cfg                                                                                                                                     (10-03 23:02)
[defaults]
host_key_checking = False

But we will look at the ansible.cfg file in a minute; let's continue with the inventory basics.

Instead, we are going to copy our local public key to the hosts so we don't need to keep the password in the inventory file (on hosts reachable directly over SSH, ssh-copy-id does the same job):

[liquid@liquid-ibm:vagrant/ansible]$ for i in web1 web2 lb1 db1                                                                                                                          (10-03 23:15)
for> do
for> cat ~/.ssh/id_rsa.pub | (vagrant ssh $i -c  "cat >> ~/.ssh/authorized_keys")
for> done

So we remove the password from our inventory:

[liquid@liquid-ibm:ansible/ansible]$ cat inventory                                                                                                                                       (10-03 23:18)
[web]
web1 ansible_ssh_host=10.0.1.5  ansible_ssh_user=vagrant 
web2 ansible_ssh_host=10.0.2.5  ansible_ssh_user=vagrant 
[lb]
lb1 ansible_ssh_host=10.0.3.5  ansible_ssh_user=vagrant
[db]
db1 ansible_ssh_host=10.0.4.5  ansible_ssh_user=vagrant 

[dc1-madrid:children]
web
lb
[dc2-madrid:children]
db


Let's check it out:

[liquid@liquid-ibm:ansible/ansible]$ ansible all -i inventory -m ping                                                                                                                    (10-03 23:18)
lb1 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
db1 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
web2 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
web1 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}

OK, all working; let's see if our parent group is working:

[liquid@liquid-ibm:ansible/ansible]$ ansible dc1-madrid -i inventory -m ping                                                                                                             (10-03 23:19)
lb1 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
web2 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
web1 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}

Now it's a pain to have to set ansible_ssh_user=vagrant on every host, so let's use group variables:

[liquid@liquid-ibm:ansible/ansible]$ cat inventory                                                                                                                                       (10-03 23:24)
[web]
web1 ansible_ssh_host=10.0.1.5  
web2 ansible_ssh_host=10.0.2.5  
[lb]
lb1 ansible_ssh_host=10.0.3.5  
[db]
db1 ansible_ssh_host=10.0.4.5  

[dc1-madrid:children]
web
lb
[dc2-madrid:children]
db

[dc-madrid:children]
dc1-madrid
dc2-madrid
[dc-madrid:vars]
ansible_ssh_user=vagrant

Here we have created another parent group called dc-madrid that contains the two parent groups dc1-madrid and dc2-madrid; then we add the ansible_ssh_user variable to the dc-madrid group and remove it from the hosts. Let's try:

[liquid@liquid-ibm:ansible/ansible]$ ansible dc-madrid -i inventory -m ping                                                                                                              (10-03 23:24)
lb1 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
web1 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
web2 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
db1 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}

OK, all working; let's continue with variables and scaling out with multiple files. When things get large, we need to divide our inventory into separate files; in this example we are going to use separate files for variables, groups and hosts:

[liquid@liquid-ibm:ansible/ansible]$ tree                                                                                                                                                (10-03 23:40)
.
├── ansible.cfg
├── group_vars
├── host_vars
└── inventory

2 directories, 2 files

We have two directories, group_vars and host_vars. I'm going to create an all file inside group_vars; the variables in this file will apply to all groups. The vars files are YAML files:

[liquid@liquid-ibm:ansible/ansible]$ cat group_vars/all                                                                                                                                  (10-03 23:48)
---

#Username variable.
username: oper
[liquid@liquid-ibm:ansible/ansible]$ tree                                                                                                                                                (10-03 23:48)
.
├── ansible.cfg
├── group_vars
│   └── all
├── host_vars
└── inventory

2 directories, 3 files

Ok, so we have specified a variable, let's test it:

[liquid@liquid-ibm:ansible/ansible]$ ansible dc-madrid -i inventory -m user -a "name={{username}} password=oper"                                                                         (10-03 23:48)
web2 | FAILED! => {
    "changed": false, 
    "cmd": "/sbin/useradd -p VALUE_SPECIFIED_IN_NO_LOG_PARAMETER -m VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", 
    "failed": true, 
    "msg": "[Errno 13] Permission denied", 
    "rc": 13
}

OK, we ran the command as the vagrant user, which doesn't have permission to create users; we will use --become, which by default uses sudo:

[liquid@liquid-ibm:ansible/ansible]$ ansible dc-madrid -i inventory -m user -a "name={{username}} password=oper" --become                                                                (10-03 23:50)
lb1 | SUCCESS => {
    "changed": true, 
    "comment": "", 
    "createhome": true, 
    "group": 1002, 
    "home": "/home/********", 
    "name": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", 
    "password": "NOT_LOGGING_PASSWORD", 
    "shell": "", 
    "state": "present", 
    "system": false, 
    "uid": 1002
}
web1 | SUCCESS => {
    "changed": true, 
    "comment": "", 
    "createhome": true, 
    "group": 1001, 
    "home": "/home/********", 
    "name": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", 
    "password": "NOT_LOGGING_PASSWORD", 
    "shell": "/bin/bash", 
    "state": "present", 
    "system": false, 
    "uid": 1001

...............


Just to check it out:

[liquid@liquid-ibm:ansible/ansible]$ ansible dc-madrid -i inventory -a "cat /etc/passwd" | grep -E '(SUCCESS|oper)'                                                                      (10-03 23:52)
lb1 | SUCCESS | rc=0 >>
oper:x:1002:1002::/home/oper:
web2 | SUCCESS | rc=0 >>
oper:x:1001:1001::/home/oper:/bin/bash
web1 | SUCCESS | rc=0 >>
oper:x:1001:1001::/home/oper:/bin/bash
db1 | SUCCESS | rc=0 >>
oper:x:1001:1001::/home/oper:/bin/bash

We can have a variable file for each of our groups, parent or child; let's create one for the example:

[liquid@liquid-ibm:ansible/ansible]$ cat group_vars/web                                                                                                                                  (10-03 23:58)
---

#Username variable.
username: web
password: web
shell: /bin/bash
[liquid@liquid-ibm:ansible/ansible]$ tree                                                                                                                                                (10-03 23:58)
.
├── ansible.cfg
├── group_vars
│   ├── all
│   ├── db
│   ├── lb
│   └── web
├── host_vars
└── inventory

2 directories, 6 files

[liquid@liquid-ibm:ansible/ansible]$ ansible web -i inventory -m user -a "name={{username}} password={{password}} shell={{shell}}" --become                                              (10-03 23:58)
web2 | SUCCESS => {
    "append": false, 
    "changed": false, 
    "comment": "", 
    "group": 1002, 
    "home": "/home/********", 
    "move_home": false, 
    "name": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", 
    "password": "NOT_LOGGING_PASSWORD", 
    "shell": "/bin/bash", 
    "state": "present", 
    "uid": 1002
}
web1 | SUCCESS => {
    "append": false, 
    "changed": false, 
    "comment": "", 
    "group": 1002, 
    "home": "/home/********", 
    "move_home": false, 
    "name": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", 
    "password": "NOT_LOGGING_PASSWORD", 
    "shell": "/bin/bash", 
    "state": "present", 
    "uid": 1002
}
[liquid@liquid-ibm:ansible/ansible]$ ansible dc-madrid -i inventory -a "cat /etc/passwd" | grep -E '(SUCCESS|web)'                                                                       (10-03 23:59)
web1 | SUCCESS | rc=0 >>
web:x:1002:1002::/home/web:/bin/bash
web2 | SUCCESS | rc=0 >>
web:x:1002:1002::/home/web:/bin/bash
db1 | SUCCESS | rc=0 >>
lb1 | SUCCESS | rc=0 >>

OK, host_vars files take precedence over the group files and the all file, so if I create a web1 file and set the username variable to web1user, it will take precedence:

[liquid@liquid-ibm:ansible/ansible]$ cat host_vars/web1                                                                                                                                  (10-04 00:03)
---

#Username variable.
username: web1user
[liquid@liquid-ibm:ansible/ansible]$ tree                                                                                                                                                (10-04 00:03)
.
├── ansible.cfg
├── group_vars
│   ├── all
│   ├── db
│   ├── lb
│   └── web
├── host_vars
│   └── web1
└── inventory

2 directories, 7 files

[liquid@liquid-ibm:ansible/ansible]$ ansible dc-madrid -i inventory -m user -a "name={{username}}" --become                                                                              (10-04 00:03)
web2 | SUCCESS => {
    "append": false, 
    "changed": false, 
    "comment": "", 
    "group": 1002, 
    "home": "/home/web", 
    "move_home": false, 
    "name": "web", 
    "shell": "/bin/bash", 
    "state": "present", 
    "uid": 1002
}
lb1 | SUCCESS => {
    "append": false, 
    "changed": false, 
    "comment": "", 
    "group": 1002, 
    "home": "/home/oper", 
    "move_home": false, 
    "name": "oper", 
    "shell": "", 
    "state": "present", 
    "uid": 1002
}
web1 | SUCCESS => {
    "changed": true, 
    "comment": "", 
    "createhome": true, 
    "group": 1003, 
    "home": "/home/web1user",     ---------------> here we can see how the host variable file takes precedence over the all o group files
    "name": "web1user", 
    "shell": "/bin/bash", 
    "state": "present", 
    "system": false, 
    "uid": 1003
}
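The precedence we just saw can be sketched as a dict merge where later sources win (a simplified illustration of all < group < host, not how Ansible actually implements variable merging):

```python
# Simplified sketch: all < group_vars < host_vars, later merges win.
all_vars   = {"username": "oper"}                        # group_vars/all
group_vars = {"username": "web", "shell": "/bin/bash"}   # group_vars/web
host_vars  = {"username": "web1user"}                    # host_vars/web1

# Merge in precedence order; keys from later dicts override earlier ones.
effective = {**all_vars, **group_vars, **host_vars}
print(effective["username"])  # web1user: the host file takes precedence
```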

OK, so that's it for a quick start on the inventory; just some quick notes on the Ansible configuration file:

The ansible.cfg file is looked up in this order:

1. ANSIBLE_CONFIG environment variable location
2. ./ansible.cfg in the current dir
3. ~/.ansible.cfg in the user's home dir
4. /etc/ansible/ansible.cfg system-wide file

Once it finds a config file it stops; it doesn't continue and merge options!
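The first-match lookup can be sketched like this (a simplified illustration, not Ansible's actual code; find_config is a made-up helper):

```python
import os

def find_config(env_var="ANSIBLE_CONFIG",
                candidates=("./ansible.cfg", "~/.ansible.cfg",
                            "/etc/ansible/ansible.cfg")):
    """Return the first config file that exists; later candidates are ignored."""
    env_path = os.environ.get(env_var)
    if env_path and os.path.isfile(env_path):
        return env_path
    for path in candidates:
        full = os.path.expanduser(path)
        if os.path.isfile(full):
            return full  # stop at the first hit, no merging of later files
    return None
```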

You can always use environment variables to override the config file, like this:

export ANSIBLE_<CONFIG_SETTING>=<VALUE>, for example export ANSIBLE_FORKS=10. Let's check an example:

We have host key checking disabled in the ansible.cfg file:

[liquid@liquid-ibm:ansible/ansible]$ cat ansible.cfg                                                                                                                                     (10-04 01:03)
[defaults]
host_key_checking = False

Now we export the variable set to True and run Ansible:

[liquid@liquid-ibm:ansible/ansible]$ ansible dc-madrid -i inventory -m ping                                                                                                              (10-04 01:06)
The authenticity of host '10.0.3.5 (10.0.3.5)' can't be established.
ECDSA key fingerprint is SHA256:uaTeRUFySMmgcw3e0pxz74wRr6PjFNSYDJaKneWoeZ8.
ECDSA key fingerprint is MD5:8a:c5:02:40:3c:86:bf:3d:00:34:62:e8:1d:53:84:bf.
Are you sure you want to continue connecting (yes/no)? The authenticity of host '10.0.1.5 (10.0.1.5)' can't be established.
ECDSA key fingerprint is SHA256:6t1RvzhGlBseZxV3ePGGILwnnSsyV9t54NP/2l2I3OA.
ECDSA key fingerprint is MD5:3b:f5:8c:95:e5:99:da:40:1f:87:38:34:93:aa:38:30.
Are you sure you want to continue connecting (yes/no)? The authenticity of host '10.0.2.5 (10.0.2.5)' can't be established.
ECDSA key fingerprint is SHA256:Pw3h/nY6a1bJIZ+q+yyirZexccWcqlgMRIJMJRJtWSg.

It is failing; if we set it back to False:

[liquid@liquid-ibm:ansible/ansible]$ export ANSIBLE_HOST_KEY_CHECKING=False                                                                                                              (10-04 01:06)
[liquid@liquid-ibm:ansible/ansible]$ ansible dc-madrid -i inventory -m ping                                                                                                              (10-04 01:06)
web2 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
db1 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
web1 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
lb1 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}

All works OK!

We are also going to set the location of our inventory file so we don't have to pass -i inventory all the time (in newer Ansible versions this setting is called inventory instead of hostfile):

[liquid@liquid-ibm:ansible/ansible]$ cat ansible.cfg                                                                                                                                     (10-04 15:31)
[defaults]
host_key_checking = False
hostfile = inventory


[liquid@liquid-ibm:ansible/ansible]$ ansible web1 -m ping                                                                                                                                (10-04 15:31)
web1 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}

Fine, let's continue.


OK, so on to modules, just a little bit of info. There are tons of modules out there: core modules supported by Ansible, extra modules that ship with the distribution but are third-party or not yet promoted to core, and deprecated modules soon to be removed.

You can always go to the Ansible website and check all the modules, but there is a command that can help when offline:

[liquid@liquid-ibm:~]$ ansible-doc -l 
a10_server                         Manage A10 Networks AX/SoftAX/Thunder/vThunder devices                                                                                                          
a10_service_group                  Manage A10 Networks devices' service groups                                                                                                                     
a10_virtual_server                 Manage A10 Networks devices' virtual servers                                                                                                                    
acl                                Sets and retrieves file ACL information.                                                                                                                        
add_host                           add a host (and alternatively a group) to the ansible-playbook in-memory inventory                                                                              
airbrake_deployment                Notify airbrake about app deployments                                                                                                                           
alternatives                       Manages alternative programs for common commands                                                                                                                
apache2_module                     enables/disables a module of the Apache2 webserver                                                                                                              
apk                                Manages apk packages                                                                                                                                            
apt                                Manages apt-packages            
..................

To get all the options for the module:

[liquid@liquid-ibm:~]$ ansible-doc apt                                                                                                                                                   (10-04 01:17)
> APT

  Manages `apt' packages (such as for Debian/Ubuntu).

Options (= is mandatory):

- allow_unauthenticated
        Ignore if packages cannot be authenticated. This is useful for bootstrapping environments that manage their own apt-key setup. (Choices: yes, no)
        [Default: no]

- autoremove
        If `yes', remove unused dependency packages for all module states except `build-dep'. (Choices: yes, no) [Default: False]

- cache_valid_time
        If `update_cache' is specified and the last run is less or equal than `cache_valid_time' seconds ago, the `update_cache' gets skipped. [Default: False]

And to get a playbook-style snippet with the options:

[liquid@liquid-ibm:~]$ ansible-doc -s apt                                                                                                                                                (10-04 01:21)
- name: Manages apt-packages
  action: apt
      allow_unauthenticated   # Ignore if packages cannot be authenticated. This is useful for bootstrapping environments that manage their own apt-key setup.
      autoremove             # If `yes', remove unused dependency packages for all module states except `build-dep'.
      cache_valid_time       # If `update_cache' is specified and the last run is less or equal than `cache_valid_time' seconds ago, the `update_cache' gets skipped.
      deb                    # Path to a .deb package on the remote machine. If :// in the path, ansible will attempt to download deb before installing. (Version added 2.1)
      default_release        # Corresponds to the `-t' option for `apt' and sets pin priorities
      dpkg_options           # Add dpkg options to apt command. Defaults to '-o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold"' Options should be supplied as comma separated
                               list
      force                  # If `yes', force installs/removes.


So let's run some example modules as ad hoc commands; first let's install some software:

[liquid@liquid-ibm:ansible/ansible]$ ansible web -i inventory -m yum -a "name=httpd state=latest" --become                                                                               (10-04 10:09)
web1 | SUCCESS => {
.............
web2 | SUCCESS => {
............

If we check:

[liquid@liquid-ibm:ansible/ansible]$ ansible web -i inventory  -a "rpm -qa httpd" --become                                                                                               (10-04 10:11)
web1 | SUCCESS | rc=0 >>
httpd-2.4.6-40.el7.centos.4.x86_64

web2 | SUCCESS | rc=0 >>
httpd-2.4.6-40.el7.centos.4.x86_64

Now let's use the service module to get httpd running and enabled on boot:

[liquid@liquid-ibm:ansible/ansible]$ ansible web -i inventory -m service -a "name=httpd state=started enabled=yes" --become                                                              (10-04 10:13)
web2 | SUCCESS => {
    "changed": true, 
    "enabled": true, 
    "name": "httpd", 
    "state": "started"
}
web1 | SUCCESS => {
    "changed": true, 
    "enabled": true, 
    "name": "httpd", 
    "state": "started"
}

[liquid@liquid-ibm:ansible/ansible]$ curl http://192.168.123.11/ | grep -i title                                                                                                         (10-04 10:15)
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  4897  100  4897    0     0  5434k      0 --:--:-- --:--:-- --:--:-- 4782k
		Apache HTTP Server Test Page powered by CentOS

OK, so the Apache daemon is running fine.

We can also use host/group target patterns, for example:

web:db     ---> OR: includes hosts in the web or db groups (union)
web:&db    ---> AND: includes only hosts that are in both groups
!web       ---> NOT: hosts not in the web group
db*        ---> wildcard
~db[0-9]+  ---> regex

You can also do complex patterns:

web:&production:!python3    --> selects all hosts that are in both the web and production groups but not in the python3 group.

Quick example:

[liquid@liquid-ibm:ansible/ansible]$ ansible web:db:lb -i inventory  -a "uname -a" --become                                                                                              (10-04 10:25)
db1 | SUCCESS | rc=0 >>
Linux db1 3.10.0-327.28.3.el7.x86_64 #1 SMP Thu Aug 18 19:05:49 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

web2 | SUCCESS | rc=0 >>
Linux web2 3.10.0-327.28.3.el7.x86_64 #1 SMP Thu Aug 18 19:05:49 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

web1 | SUCCESS | rc=0 >>
Linux web1 3.10.0-327.28.3.el7.x86_64 #1 SMP Thu Aug 18 19:05:49 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

lb1 | SUCCESS | rc=0 >>
Linux lb1 3.13.0-43-generic #72-Ubuntu SMP Mon Dec 8 19:35:06 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
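The pattern semantics can be sketched with set operations (an illustration only, not Ansible's parser; the group contents are made up):

```python
# Hypothetical inventory groups, as sets of host names.
groups = {
    "web": {"web1", "web2"},
    "db":  {"db1"},
    "lb":  {"lb1"},
}
all_hosts = set().union(*groups.values())

web_or_db  = groups["web"] | groups["db"]   # web:db  -> union
web_and_db = groups["web"] & groups["db"]   # web:&db -> intersection
not_web    = all_hosts - groups["web"]      # !web    -> complement

print(sorted(web_or_db))   # ['db1', 'web1', 'web2']
print(sorted(web_and_db))  # []
print(sorted(not_web))     # ['db1', 'lb1']
```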


OK, let's check the setup module, used to gather facts:

[liquid@liquid-ibm:ansible/ansible]$ ansible web1 -i inventory  -m setup                                                                                                                 (10-04 10:26)
web1 | SUCCESS => {
    "ansible_facts": {
        "ansible_all_ipv4_addresses": [
            "192.168.123.10", 
            "192.168.121.209", 
            "10.0.1.5"
        ], 
        "ansible_all_ipv6_addresses": [
            "fe80::5054:ff:fed6:7502", 
            "fe80::5054:ff:fe97:3af9", 
            "fe80::5054:ff:fe64:860a"
        ], 
        "ansible_architecture": "x86_64", 
        "ansible_bios_date": "04/01/2014", 
        "ansible_bios_version": "1.9.1-1.fc24", 
        "ansible_cmdline": {
            "BOOT_IMAGE": "/vmlinuz-3.10.0-327.28.3.el7.x86_64", 
            "biosdevname": "0", 
....................

We get loads of info; let's count the lines:

[liquid@liquid-ibm:ansible/ansible]$ ansible web1 -i inventory  -m setup | wc -l                                                                                                         (10-04 10:27)
363

That's a lot of info from our host that we can use as variables; we can also filter the output:

[liquid@liquid-ibm:ansible/ansible]$ ansible "web*" -i inventory  -m setup -a "filter=ansible_memfree_mb"                                                                                (10-04 10:30)
web2 | SUCCESS => {
    "ansible_facts": {
        "ansible_memfree_mb": 425
    }, 
    "changed": false
}
web1 | SUCCESS => {
    "ansible_facts": {
        "ansible_memfree_mb": 423
    }, 
    "changed": false
}

We can also save all this information to one file per host using the --tree option:

[liquid@liquid-ibm:ansible/ansible]$ ansible all -i inventory -m setup --tree ./setup
...

[liquid@liquid-ibm:ansible/ansible]$ tree setup                                                                                                                                          (10-04 15:18)
setup
├── db1
├── lb1
├── web1
└── web2

0 directories, 4 files

[liquid@liquid-ibm:ansible/ansible]$ cat setup/db1                                                                                                                                       (10-04 15:18)
{"ansible_facts": {"ansible_all_ipv4_addresses": ["192.168.123.13", "192.168.121.108", "10.0.4.5"], "ansible_all_ipv6_addresses": ["fe80::5054:ff:fea5:42a7", "fe80::5054:ff:fea8:d7ca", "fe80::5054:ff:fe7d:cc5"], "ansible_architecture": "x86_64", "ansible_bios_date": "04/01/2014", "ansible_bios_version": "1.9.1-1.fc24", "ansible_cmdline": {"BOOT_IMAGE": "/vmlinuz-3.10.0-327.28.3.el7.x86_64", "biosdevname": "0", "console": "ttyS0,115200", "crashkernel": "auto", "net.ifnames": "0", "no_timer_check": true, "quiet": true, "rd.lvm.lv": "VolGroup00/LogVol01", "rhgb": true, "ro": true, "root": "/dev/mapper/VolGroup00-LogVol00"}, "ansible_date_time": {"date": "2016-10-04", "day": "04", "epoch": "1475587051", "hour": "13", "iso8601": "2016-10-04T13:17:31Z", "iso8601_basic": "20161004T131731440982", "iso8601_basic_short": "20161004T131731", "iso8601_micro": "2016-10-04T13:17:31.441099Z", "minute": "17", "month": "10", "second": "31", "time": "13:17:31", "tz": "UTC", "tz_offset": "+0000", "weekday": "Tuesday", "weekday_number": "2", "weeknumber": "40", "year": "2016"}, "ansible_default_ipv4": 

With this kind of information you can easily create a detailed system inventory of all your systems, using for example programs like ansible-cmdb: https://github.com/fboender/ansible-cmdb


Playbooks. OK, so now for a little intro to plays and playbooks; here we can see all the power Ansible provides. Playbooks are written in YAML format, so take care with whitespace!

So we are going to put into our playbook all the previous ad hoc commands we ran in the examples:

[liquid@liquid-ibm:ansible/ansible]$ cat playbook.yaml                                                                                                                                   (10-05 00:04)
---     
- hosts: web
  become: yes
  tasks:
  - name: Update the OS to latest pkgs
    yum: name=* state=latest
  - name: Ensure HTTPD is installed
    yum:  name=httpd state=latest
  - name: Start HTTPD and enable on boot
    service: name=httpd enabled=yes state=started

- hosts: db
  become: yes
  tasks:
  - name: Update the OS to latest pkgs
    yum: name=* state=latest
  - name: Ensure Mysql is installed
    yum: name=mariadb-server state=latest
  - name: Start Mysql and enable on boot
    service: name=mariadb enabled=yes state=started

- hosts: dc-madrid:&ubuntu
  become: yes
  tasks:
  - name: Ensure Iptables is installed
    apt: name=iptables state=latest
  - name: Disable firewall ufw
    ufw: state=disabled policy=allow
- hosts: dc-madrid:&centos
  become: yes
  tasks:
  - name: Ensure Iptables is installed
    yum: name=iptables state=latest
  - name: Ensure iptables-services is installed
    yum: name=iptables-services state=latest
  - name: disable firewalld
    service: name=firewalld enabled=no state=stopped

A little explanation, block by block:

---
- hosts: web       --------> run the play on the web group
  become: yes      --------> become another user (root; it uses sudo by default)
  tasks:           --------> here we define the play's tasks
  - name: Update the OS to latest pkgs
    yum: name=* state=latest
  - name: Ensure HTTPD is installed
    yum:  name=httpd state=latest
  - name: Start HTTPD and enable on boot
    service: name=httpd enabled=yes state=started

We end up with a running Apache server.

- hosts: db
  become: yes
  tasks:
  - name: Update the OS to latest pkgs
    yum: name=* state=latest
  - name: Ensure Mysql is installed
    yum: name=mariadb-server state=latest
  - name: Start Mysql and enable on boot
    service: name=mariadb enabled=yes state=started

This is much the same; we end up with a running MySQL database.


- hosts: dc-madrid:&ubuntu       ----> here I select all the hosts that are in the dc-madrid group and also belong to the ubuntu group
  become: yes
  tasks:
  - name: Ensure Iptables is installed
    apt: name=iptables state=latest   ---> I use the Apt module 
  - name: Disable firewall ufw
    ufw: state=disabled policy=allow ----> for the moment I want to disable the firewall in ubuntu
- hosts: dc-madrid:&centos     ---> all servers that belong to dc-madrid and centos groups
  become: yes
  tasks:
  - name: Ensure Iptables is installed
    yum: name=iptables state=latest
  - name: Ensure iptables-services is installed
    yum: name=iptables-services state=latest    ---> CentOS 7 only ships the firewalld service by default; we need the iptables-services pkg
  - name: disable firewalld
    service: name=firewalld enabled=no state=stopped

OK, so let's run the playbook:

[liquid@liquid-ibm:ansible/ansible]$ ansible-playbook playbook.yaml                                                                                                                      (10-05 00:04)

PLAY [web] *********************************************************************

TASK [setup] *******************************************************************
ok: [web1]
ok: [web2]

TASK [Update the OS to latest pkgs] ********************************************
ok: [web2]
ok: [web1]

TASK [Ensure HTTPD is installed] ***********************************************
ok: [web1]
ok: [web2]

TASK [Start HTTPD and enable on boot] ******************************************
ok: [web1]
ok: [web2]

PLAY [db] **********************************************************************

TASK [setup] *******************************************************************
ok: [db1]

TASK [Update the OS to latest pkgs] ********************************************
ok: [db1]

TASK [Ensure Mysql is installed] ***********************************************
ok: [db1]

TASK [Start Mysql and enable on boot] ******************************************
ok: [db1]

PLAY [dc-madrid:&ubuntu] *******************************************************

TASK [setup] *******************************************************************
ok: [lb1]

TASK [Ensure Iptables is installed] ********************************************
ok: [lb1]

TASK [Disable firewall ufw] ****************************************************
ok: [lb1]

PLAY [dc-madrid:&centos] *******************************************************

TASK [setup] *******************************************************************
ok: [db1]
ok: [web1]
ok: [web2]

TASK [Ensure Iptables is installed] ********************************************
ok: [web1]
ok: [db1]
ok: [web2]

TASK [Ensure iptables-services is installed] ***********************************
ok: [web2]
ok: [web1]
ok: [db1]

TASK [disable firewalld] *******************************************************
ok: [web2]
ok: [db1]
ok: [web1]

PLAY RECAP *********************************************************************
db1                        : ok=8    changed=0    unreachable=0    failed=0   
lb1                        : ok=3    changed=0    unreachable=0    failed=0   
web1                       : ok=8    changed=0    unreachable=0    failed=0   
web2                       : ok=8    changed=0    unreachable=0    failed=0   


All went OK! As you can see there is a TASK called setup that runs for each play:

TASK [setup] *******************************************************************
ok: [db1]
ok: [web1]
ok: [web2]

This is because fact gathering is enabled by default: before each play, Ansible gathers facts from all hosts involved in that play. This can be time consuming, so if we are not going to use any variables from the facts, we can disable it by adding gather_facts: False to the play:

[liquid@liquid-ibm:ansible/ansible]$ cat playbook.yaml                                                                                                                                   (10-05 00:13)
---
- hosts: web
  become: yes
  gather_facts: False
  tasks:
  - name: Update the OS to latest pkgs
    yum: name=* state=latest
  - name: Ensure HTTPD is installed
    yum:  name=httpd state=latest
  - name: Start HTTPD and enable on boot
    service: name=httpd enabled=yes state=started

- hosts: db
  become: yes
  gather_facts: False
  tasks:
  - name: Update the OS to latest pkgs
    yum: name=* state=latest
  - name: Ensure Mysql is installed
    yum: name=mariadb-server state=latest
  - name: Start Mysql and enable on boot
    service: name=mariadb enabled=yes state=started

Now let's run it:

[liquid@liquid-ibm:ansible/ansible]$ ansible-playbook playbook.yaml                                                                                                                      (10-05 00:13)

PLAY [web] *********************************************************************

TASK [Update the OS to latest pkgs] ********************************************
ok: [web2]
ok: [web1]

TASK [Ensure HTTPD is installed] ***********************************************
ok: [web2]
ok: [web1]

TASK [Start HTTPD and enable on boot] ******************************************
ok: [web2]
ok: [web1]

PLAY [db] **********************************************************************

TASK [Update the OS to latest pkgs] ********************************************
ok: [db1]

TASK [Ensure Mysql is installed] ***********************************************
ok: [db1]

TASK [Start Mysql and enable on boot] ******************************************
ok: [db1]

PLAY [dc-madrid:&ubuntu] *******************************************************

TASK [setup] *******************************************************************
ok: [lb1]

There is no setup task anymore for the plays where we used gather_facts: False.
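
Note that gather_facts: False doesn't mean facts are gone for good: the setup module can still be run as a normal task to gather them on demand. A sketch (not part of our playbook):

```yaml
---
- hosts: web
  become: yes
  gather_facts: False
  tasks:
  - name: Gather facts only at this point, when we need them
    setup:
  - name: Facts are now available as variables
    debug: msg="{{ ansible_hostname }}"
```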

When a task in a play fails on one host, Ansible stops there and doesn't process the rest of the tasks for that host. If after the failure we want to run the playbook again, but ONLY on the failed hosts, we can use the retry function:

First we set the retry files path in ansible.cfg:

[liquid@liquid-ibm:ansible/ansible]$ cat ansible.cfg                                                                                                                                     (10-05 00:17)
[defaults]
host_key_checking = False
hostfile = inventory
retry_files_save_path = /home/liquid/vagrant/ansible/ansible

Then we run a playbook that fails:

[liquid@liquid-ibm:ansible/ansible]$ ansible-playbook playbook.yaml                                                                                                                      (10-05 00:17)

PLAY [web] *********************************************************************

TASK [Update the OS to latest pkgs] ********************************************
ok: [web2]
ok: [web1]

TASK [Ensure HTTPD is installed] ***********************************************
ok: [web2]
ok: [web1]

TASK [Start HTTPD and enable on boot] ******************************************
ok: [web2]
ok: [web1]

PLAY [db] **********************************************************************

TASK [Update the OS to latest pkgs] ********************************************
fatal: [db1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh.", "unreachable": true}
	to retry, use: --limit @/home/liquid/vagrant/ansible/ansible/playbook.retry

PLAY RECAP *********************************************************************
db1                        : ok=0    changed=0    unreachable=1    failed=0   
web1                       : ok=3    changed=0    unreachable=0    failed=0   
web2                       : ok=3    changed=0    unreachable=0    failed=0   


The db1 host is unreachable. Once the host is up again, we can run the playbook only on that host:

[liquid@liquid-ibm:ansible/ansible]$ ansible-playbook playbook.yaml --limit @/home/liquid/vagrant/ansible/ansible/playbook.retry                                                         (10-05 00:21)

PLAY [web] *********************************************************************
skipping: no hosts matched

PLAY [db] **********************************************************************

TASK [Update the OS to latest pkgs] ********************************************
ok: [db1]

TASK [Ensure Mysql is installed] ***********************************************
ok: [db1]

TASK [Start Mysql and enable on boot] ******************************************
ok: [db1]

PLAY [dc-madrid:&ubuntu] *******************************************************
skipping: no hosts matched

PLAY [dc-madrid:&centos] *******************************************************

TASK [setup] *******************************************************************
ok: [db1]

TASK [Ensure Iptables is installed] ********************************************
ok: [db1]

TASK [Ensure iptables-services is installed] ***********************************
ok: [db1]

TASK [disable firewalld] *******************************************************
ok: [db1]

PLAY RECAP *********************************************************************
db1                        : ok=7    changed=0    unreachable=0    failed=0   



OK, let's continue. Now let's take a look at the when clause, which makes a task execute only when certain criteria are met.

Remember we were using groups in our playbook to separate Red Hat and Ubuntu/Debian hosts, so we don't run a yum task on a Debian system:

- hosts: dc-madrid:&ubuntu

With facts we can do this without the extra groups. For example, querying the ansible_os_family fact with the setup module (the same way we query ansible_distribution below) returns:

db1 | SUCCESS => {
    "ansible_facts": {
        "ansible_os_family": "RedHat"
    }, 
    "changed": false
}
lb1 | SUCCESS => {
    "ansible_facts": {
        "ansible_os_family": "Debian"
    }, 
    "changed": false
}

We can also use the ansible_distribution fact to see which Linux distro a host is running, and use it in the when clause:

[liquid@liquid-ibm:ansible/ansible]$ ansible lb1:db1 -m setup -a "filter=ansible_distribution"                                                                                           (10-05 12:02)
db1 | SUCCESS => {
    "ansible_facts": {
        "ansible_distribution": "CentOS"
    }, 
    "changed": false
}
lb1 | SUCCESS => {
    "ansible_facts": {
        "ansible_distribution": "Ubuntu"
    }, 
    "changed": false
}


OK, so now our playbook only needs one play to ensure iptables is installed and the default firewall is disabled. Remember that we need gather_facts enabled for this to work:

- hosts: dc-madrid
  become: yes
  tasks:
  - name: Ensure Iptables is installed
    apt: name=iptables state=latest
    when: ansible_os_family == "Debian"
  - name: Disable firewall ufw
    ufw: state=disabled policy=allow
    when: ansible_distribution == "Ubuntu"

  - name: Ensure Iptables is installed
    yum: name=iptables state=latest
    when: ansible_os_family == "RedHat"
  - name: Ensure iptables-services is installed
    yum: name=iptables-services state=latest
    when: ansible_os_family == "RedHat"
  - name: disable firewalld
    service: name=firewalld enabled=no state=stopped
    when: ansible_distribution == "CentOS"
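
As a side note, the two Red Hat yum tasks could also be collapsed into a single task with a loop; a sketch of the same installs using with_items:

```yaml
  - name: Ensure iptables and iptables-services are installed
    yum: name="{{ item }}" state=latest
    with_items:
    - iptables
    - iptables-services
    when: ansible_os_family == "RedHat"
```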


Here you can see the output of the play run, all OK:

PLAY [dc-madrid] ***************************************************************

TASK [setup] *******************************************************************
ok: [db1]
ok: [web2]
ok: [web1]
ok: [lb1]

TASK [Ensure Iptables is installed] ********************************************
skipping: [web1]
skipping: [web2]
skipping: [db1]
ok: [lb1]

TASK [Disable firewall ufw] ****************************************************
skipping: [web1]
skipping: [web2]
skipping: [db1]
ok: [lb1]

TASK [Ensure Iptables is installed] ********************************************
skipping: [lb1]
ok: [db1]
ok: [web1]
ok: [web2]

TASK [Ensure iptables-services is installed] ***********************************
skipping: [lb1]
ok: [db1]
ok: [web1]
ok: [web2]

TASK [disable firewalld] *******************************************************
skipping: [lb1]
ok: [web1]
ok: [db1]
ok: [web2]

PLAY RECAP *********************************************************************
db1                        : ok=7    changed=0    unreachable=0    failed=0   
lb1                        : ok=3    changed=0    unreachable=0    failed=0   
web1                       : ok=7    changed=0    unreachable=0    failed=0   
web2                       : ok=7    changed=0    unreachable=0    failed=0  


OK, now let's use templates in the Jinja2 format. Templates let us parametrize parts of otherwise static configuration files.

Lets continue with our playbook example and add a template to modify our web server config:

- hosts: web
  become: yes
  gather_facts: False
  vars:
     http_port: 80
     http_user: apache
     doc_dir: /ansible
     doc_root: /var/www/html/ansible
  tasks:
  - name: Update the OS to latest pkgs
    yum: name=* state=latest
  - name: Ensure HTTPD is installed
    yum:  name=httpd state=latest
  - name: Start HTTPD and enable on boot
    service: name=httpd enabled=yes state=started
  - name: Deploy config httpd
    template: src=templates/httpd.j2 dest=/etc/httpd/conf/httpd.conf  
    notify:
        - Restart Apache
  handlers:
        - name: Restart Apache
          service: name=httpd state=restarted

Let's break it down:

  vars:
     http_port: 80
     http_user: apache
     doc_dir: /ansible
     doc_root: /var/www/html/ansible

These are the variables that will be substituted into our config file template.

  - name: Deploy config httpd
    template: src=templates/httpd.j2 dest=/etc/httpd/conf/httpd.conf

Here we specify where our Jinja2 template is located and where the rendered file should be stored on the remote hosts.

    notify:
        - Restart Apache
  handlers:
        - name: Restart Apache
          service: name=httpd state=restarted

Here we use handlers: if Ansible detects that the "Deploy config httpd" task made changes, it will run the "Restart Apache" handler. Handlers run only once, and always at the end of the play.

 handlers:
        - name: Restart Apache
          service: name=httpd state=restarted

Here we only have one handler, and it uses the service module to restart httpd.
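
Because handlers run only once per play, several tasks can notify the same handler and Apache will still be restarted a single time at the end. A sketch (the vhost.j2 template is a hypothetical second config, not in our repo):

```yaml
  tasks:
  - name: Deploy config httpd
    template: src=templates/httpd.j2 dest=/etc/httpd/conf/httpd.conf
    notify:
        - Restart Apache
  - name: Deploy an extra config (hypothetical vhost.j2)
    template: src=templates/vhost.j2 dest=/etc/httpd/conf.d/vhost.conf
    notify:
        - Restart Apache
  handlers:
        - name: Restart Apache
          service: name=httpd state=restarted
```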


Ok, lets check our template templates/httpd.j2

[liquid@liquid-ibm:ansible/ansible]$ cat templates/httpd.j2                                                                                                                              (10-06 06:20)
ServerRoot "/etc/httpd"
Listen {{ http_port }}
Include conf.modules.d/*.conf
User {{ http_user }}
Group apache
#ServerName www.example.com:80

<Directory />
    AllowOverride none
    Require all denied
</Directory>

DocumentRoot {{ doc_root }}

<Directory "/var/www">
    AllowOverride None
    # Allow open access:
    Require all granted
</Directory>

<Directory "/var/www/html">
    Options Indexes FollowSymLinks
    AllowOverride None
    Require all granted
</Directory>

<IfModule dir_module>
    DirectoryIndex index.html
</IfModule>

ErrorLog "logs/error_log"

LogLevel warn

<IfModule alias_module>
    Alias {{ doc_dir }} {{ doc_root }}
    ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"
</IfModule>


Inside the {{ }} brackets you can see our variables. OK, let's run the playbook:

liquid@liquid-ibm:ansible/ansible]$ ansible-playbook playbook.yaml                                                                                                                      (10-06 06:28)

PLAY [web] *********************************************************************

TASK [Update the OS to latest pkgs] ********************************************
ok: [web1]
ok: [web2]

TASK [Ensure HTTPD is installed] ***********************************************
changed: [web1]
changed: [web2]

TASK [Start HTTPD and enable on boot] ******************************************
changed: [web1]
changed: [web2]

TASK [Deploy config httpd] *****************************************************
changed: [web1]
changed: [web2]

RUNNING HANDLER [Restart Apache] ***********************************************
changed: [web1]
changed: [web2]

...............


PLAY RECAP *********************************************************************
db1                        : ok=7    changed=0    unreachable=0    failed=0   
lb1                        : ok=3    changed=0    unreachable=0    failed=0   
web1                       : ok=9    changed=4    unreachable=0    failed=0   
web2                       : ok=9    changed=4    unreachable=0    failed=0   

Let's test:

[liquid@liquid-ibm:ansible/ansible]$ curl 192.168.123.10 | grep title                                                                                                                    (10-06 06:31)
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  4897  100  4897    0     0  3611k      0 --:--:-- --:--:-- --:--:-- 4782k
		Apache HTTP Server Test Page powered by CentOS
[liquid@liquid-ibm:ansible/ansible]$ curl 192.168.123.11 | grep title                                                                                                                    (10-06 06:32)
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  4897  100  4897    0     0  1370k      0 --:--:-- --:--:-- --:--:-- 1594k
		Apache HTTP Server Test Page powered by CentOS


OK, that's working. Now let's add a customized index.html with info gathered from facts:

First we modify our play and add a new task:

  - name: Deploy config httpd
    template: src=templates/httpd.j2 dest=/etc/httpd/conf/httpd.conf
    notify:
        - Restart Apache
  - name: Insert custom index.html in host
    template: src=templates/index.j2 dest={{doc_root}}/index.html
  handlers:
        - name: Restart Apache
          service: name=httpd state=restarted


We have added:

  - name: Insert custom index.html in host
    template: src=templates/index.j2 dest={{doc_root}}/index.html

And in our Jinja2 template we have:

[liquid@liquid-ibm:ansible/ansible]$ cat templates/index.j2                                                                                                                              (10-06 06:59)

Ansible Test. Server IP: {{ansible_default_ipv4['address']}} hostname: {{ansible_hostname}} Distro: {{ansible_distribution}}
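
A quick way to double-check the facts referenced in a template like this is a debug task; a sketch (not part of our playbook; gather_facts must be enabled):

```yaml
---
- hosts: web
  tasks:
  - name: Print the facts used in index.j2
    debug: msg="{{ ansible_default_ipv4['address'] }} {{ ansible_hostname }} {{ ansible_distribution }}"
```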

Ok, let's run the playbook:

TASK [Deploy config httpd] *****************************************************
ok: [web1]
changed: [web2]

TASK [Insert custom index.html in host] ****************************************
changed: [web1]
changed: [web2]

RUNNING HANDLER [Restart Apache] ***********************************************
changed: [web2]

And test:

[liquid@liquid-ibm:ansible/ansible]$ curl 192.168.123.10                     (10-06 07:01)

Ansible Test. Server IP: 192.168.121.209 hostname: web1 Distro: CentOS

[liquid@liquid-ibm:ansible/ansible]$ curl 192.168.123.11 (10-06 07:01)

Ansible Test. Server IP: 192.168.121.94 hostname: web2 Distro: CentOS

Ok, that's working. Let's take a quick look at roles. Roles can contain tasks, variables, templates, handlers, etc., and they have a directory structure we have to follow. From our playbook, we are going to create 2 roles: webserver and dbserver. Let's start with webserver; first I create the dir structure:

[liquid@liquid-ibm:ansible/ansible]$ mkdir roles                             (10-06 09:23)
[liquid@liquid-ibm:ansible/ansible]$ mkdir roles/webserver                   (10-06 09:24)
[liquid@liquid-ibm:ansible/ansible]$ mkdir roles/webserver/vars              (10-06 09:24)
[liquid@liquid-ibm:ansible/ansible]$ mkdir roles/webserver/handlers          (10-06 09:24)
[liquid@liquid-ibm:ansible/ansible]$ mkdir roles/webserver/tasks             (10-06 09:24)
[liquid@liquid-ibm:ansible/ansible]$ mkdir roles/webserver/templates         (10-06 09:24)
[liquid@liquid-ibm:ansible/ansible]$ tree roles                              (10-06 09:25)
roles
└── webserver
    ├── handlers
    ├── tasks
    ├── templates
    └── vars

5 directories, 0 files

Now we are going to disassemble our playbook and put each part of the code in its directory.

First the vars, in /home/liquid/vagrant/ansible/ansible/roles/webserver/vars:

[liquid@liquid-ibm:webserver/vars]$ cat main.yaml                            (10-06 09:29)
---
http_port: 80
http_user: apache
doc_dir: /ansible
doc_root: /var/www/html/ansible

Then the tasks. As you can see, we have removed the templates/ prefix from our template paths, because now Ansible knows where to look for them:

[liquid@liquid-ibm:webserver/tasks]$ cat main.yaml                           (10-06 09:30)
---
- name: Update the OS to latest pkgs
  yum: name=* state=latest
- name: Ensure HTTPD is installed
  yum: name=httpd state=latest
- name: Start HTTPD and enable on boot
  service: name=httpd enabled=yes state=started
- name: Deploy config httpd
  template: src=httpd.j2 dest=/etc/httpd/conf/httpd.conf
  notify:
      - Restart Apache
- name: Insert custom index.html in host
  template: src=index.j2 dest={{doc_root}}/index.html

And the handlers:
[liquid@liquid-ibm:webserver/handlers]$ cat main.yaml                        (10-06 09:32)
---
- name: Restart Apache
  service: name=httpd state=restarted

For the templates we just copy them into the templates dir:

[liquid@liquid-ibm:roles/webserver]$ cp ../../templates/*.j2 templates       (10-06 09:33)
[liquid@liquid-ibm:roles/webserver]$ ls templates                            (10-06 09:33)
httpd.j2  index.j2

[liquid@liquid-ibm:ansible/roles]$ tree                                      (10-06 09:34)
.
└── webserver
    ├── handlers
    │   └── main.yaml
    ├── tasks
    │   └── main.yaml
    ├── templates
    │   ├── httpd.j2
    │   └── index.j2
    └── vars
        └── main.yaml

Ok, let's create a simple playbook to call this role:

[liquid@liquid-ibm:ansible/ansible]$ cat webserver.yaml                      (10-06 09:38)
---
- hosts: web
  become: yes
  gather_facts: yes
  roles:
  - webserver

[liquid@liquid-ibm:ansible/ansible]$ ansible-playbook webserver.yaml         (10-06 09:38)

PLAY [web] *********************************************************************

TASK [setup] *******************************************************************
ok: [web1]
ok: [web2]

TASK [webserver : Update the OS to latest pkgs] ********************************
ok: [web1]
ok: [web2]

TASK [webserver : Ensure HTTPD is installed] ***********************************
ok: [web1]
ok: [web2]

TASK [webserver : Start HTTPD and enable on boot] ******************************
ok: [web1]
ok: [web2]

TASK [webserver : Deploy config httpd] *****************************************
ok: [web1]
ok: [web2]

TASK [webserver : Insert custom index.html in host] ****************************
ok: [web1]
ok: [web2]

PLAY RECAP *********************************************************************
web1                       : ok=6    changed=0    unreachable=0    failed=0
web2                       : ok=6    changed=0    unreachable=0    failed=0

Ok, now let's create the dbserver role. This is a very simple role; we only have tasks:

[liquid@liquid-ibm:ansible/ansible]$ mkdir roles/dbserver                    (10-06 09:38)
[liquid@liquid-ibm:ansible/ansible]$ mkdir roles/dbserver/tasks              (10-06 09:39)
[liquid@liquid-ibm:ansible/ansible]$ mkdir roles/dbserver/
[liquid@liquid-ibm:ansible/ansible]$ cat roles/dbserver/tasks/main.yaml      (10-06 09:42)
- name: Update the OS to latest pkgs
  yum: name=* state=latest
- name: Ensure Mysql is installed
  yum: name=mariadb-server state=latest
- name: Start Mysql and enable on boot
  service: name=mariadb enabled=yes state=started

[liquid@liquid-ibm:ansible/roles]$ tree                                      (10-06 09:41)
.
├── dbserver
│   └── tasks
│       └── main.yaml
└── webserver
    ├── handlers
    │   └── main.yaml
    ├── tasks
    │   └── main.yaml
    ├── templates
    │   ├── httpd.j2
    │   └── index.j2
    └── vars
        └── main.yaml

7 directories, 6 files

We create a basic playbook that calls the role, and we run it:

[liquid@liquid-ibm:ansible/ansible]$ ansible-playbook dbserver.yaml          (10-06 10:04)

PLAY [db] **********************************************************************

TASK [setup] *******************************************************************
ok: [db1]

TASK [dbserver : Update the OS to latest pkgs] *********************************
ok: [db1]

TASK [dbserver : Ensure Mysql is installed] ************************************
ok: [db1]

TASK [dbserver : Start Mysql and enable on boot] *******************************
ok: [db1]

PLAY RECAP *********************************************************************
db1                        : ok=4    changed=0    unreachable=0    failed=0

I'm going to create a final role, called server_basic, with basic config for all servers: install but disable iptables, disable selinux, configure ntp. First the tree:

[liquid@liquid-ibm:ansible/ansible]$ mkdir -p roles/server_basic/tasks       (10-06 10:23)
[liquid@liquid-ibm:ansible/ansible]$ mkdir -p roles/server_basic/handlers    (10-06 10:24)
[liquid@liquid-ibm:ansible/ansible]$ mkdir -p roles/server_basic/templates   (10-06 10:24)
[liquid@liquid-ibm:ansible/ansible]$ mkdir -p roles/server_basic/vars        (10-06 10:24)

[liquid@liquid-ibm:ansible/roles]$ tree server_basic                         (10-06 13:30)
server_basic
├── handlers
│   └── main.yaml
├── tasks
│   ├── iptables.yaml
│   ├── main.yaml
│   ├── ntp.yaml
│   └── selinux.yaml
├── templates
│   └── ntp.j2
└── vars
    └── main.yaml

4 directories, 7 files

[liquid@liquid-ibm:ansible/roles]$ cat server_basic/handlers/main.yaml       (10-06 13:31)
---
- name: NTP restart
  service: name=ntpd state=restarted

[liquid@liquid-ibm:ansible/roles]$ cat server_basic/tasks/main.yaml          (10-06 13:31)
---
- include: iptables.yaml
- include: ntp.yaml
- include: selinux.yaml

[liquid@liquid-ibm:ansible/roles]$ cat server_basic/tasks/iptables.yaml      (10-06 13:31)
---
- name: Ensure Iptables is installed
  apt: name=iptables state=latest
  when: ansible_os_family == "Debian"
- name: Disable firewall ufw
  ufw: state=disabled policy=allow
  when: ansible_distribution == "Ubuntu"
- name: Ensure Iptables is installed
  yum: name=iptables state=latest
  when: ansible_os_family == "RedHat"
- name: Ensure iptables-services is installed
  yum: name=iptables-services state=latest
  when: ansible_os_family == "RedHat"
- name: disable firewalld
  service: name=firewalld enabled=no state=stopped
  when: ansible_distribution == "CentOS"

[liquid@liquid-ibm:ansible/roles]$ cat server_basic/tasks/ntp.yaml           (10-06 13:31)
---
- name: Check if ntp is installed and updated
  yum: name=ntp state=latest
- name: Configure the ntp daemon with a template
  template: src=ntp.j2 dest=/etc/ntp/ntp.conf
  notify:
      - NTP restart

[liquid@liquid-ibm:ansible/roles]$ cat server_basic/tasks/selinux.yaml       (10-06 13:32)
---
- name: disable selinux
  selinux: policy=targeted state=permissive
  when: ansible_os_family == "RedHat"

[liquid@liquid-ibm:ansible/roles]$ cat server_basic/templates/ntp.j2         (10-06 13:32)
driftfile /var/lib/ntp/drift
restrict default nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict ::1
server {{ ntp1server }}
server {{ ntp2server }}
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys

[liquid@liquid-ibm:ansible/roles]$ cat server_basic/vars/main.yaml           (10-06 13:32)
---
ntp1server: hora.rediris.es
ntp2server: 0.centos.pool.ntp.org

Ok, so let's create a site.yaml file, where we are going to have all our roles/plays run:

[liquid@liquid-ibm:ansible/ansible]$ cat site.yaml                           (10-06 13:33)
---
- hosts: web
  become: yes
  gather_facts: yes
  roles:
  - server_basic
  - webserver
  tags:
  - web
- hosts: db
  become: yes
  gather_facts: yes
  roles:
  - server_basic
  - dbserver
  tags:
  - db

You can see I have used tags; I can invoke them if I only want to run certain plays instead of the full run. Anyhow, here is the full run output, with all our roles, on new clean systems:

[liquid@liquid-ibm:ansible/ansible]$ ansible-playbook site.yaml              (10-06 13:30)

PLAY [web] *********************************************************************

TASK [setup] *******************************************************************
ok: [web1]
ok: [web2]

TASK [server_basic : Ensure Iptables is installed] *****************************
skipping: [web1]
skipping: [web2]

TASK [server_basic : Disable firewall ufw] *************************************
skipping: [web1]
skipping: [web2]

TASK [server_basic : Ensure Iptables is installed] *****************************
ok: [web1]
ok: [web2]

TASK [server_basic : Ensure iptables-services is installed] ********************
changed: [web1]
changed: [web2]

TASK [server_basic : disable firewalld] ****************************************
ok: [web2]
ok: [web1]

TASK [server_basic : Check if ntp is installed and updated] ********************
changed: [web2]
changed: [web1]

TASK [server_basic : Configure the ntp daemon with a template] *****************
changed: [web2]
changed: [web1]

TASK [server_basic : disable selinux] ******************************************
changed: [web2]
changed: [web1]

TASK [webserver : Update the OS to latest pkgs] ********************************
changed: [web2]
changed: [web1]

TASK [webserver : Ensure HTTPD is installed] ***********************************
changed: [web1]
changed: [web2]

TASK [webserver : Start HTTPD and enable on boot] ******************************
changed: [web1]
changed: [web2]

TASK [webserver : Deploy config httpd] *****************************************
changed: [web2]
changed: [web1]

TASK [webserver : Check if doc_root exists] ************************************
ok: [web1]
ok: [web2]

TASK [webserver : Create doc_root if it doesn't exist] *************************
changed: [web2]
changed: [web1]

TASK [webserver : Insert custom index.html in host] ****************************
changed: [web2]
changed: [web1]

RUNNING HANDLER [server_basic : NTP restart] ***********************************
changed: [web1]
changed: [web2]

RUNNING HANDLER [webserver : Restart Apache] ***********************************
changed: [web1]
changed: [web2]

PLAY [db] **********************************************************************

TASK [setup] *******************************************************************
ok: [db1]

TASK [server_basic : Ensure Iptables is installed] *****************************
skipping: [db1]

TASK [server_basic : Disable firewall ufw] *************************************
skipping: [db1]

TASK [server_basic : Ensure Iptables is installed] *****************************
ok: [db1]

TASK [server_basic : Ensure iptables-services is installed] ********************
changed: [db1]

TASK [server_basic : disable firewalld] ****************************************
ok: [db1]

TASK [server_basic : Check if ntp is installed and updated] ********************
changed: [db1]

TASK [server_basic : Configure the ntp daemon with a template] *****************
changed: [db1]

TASK [server_basic : disable selinux] ******************************************
changed: [db1]

TASK [dbserver : Update the OS to latest pkgs] *********************************
changed: [db1]

TASK [dbserver : Ensure Mysql is installed] ************************************
changed: [db1]

TASK [dbserver : Start Mysql and enable on boot] *******************************
changed: [db1]

RUNNING HANDLER [server_basic : NTP restart] ***********************************
changed: [db1]

PLAY RECAP *********************************************************************
db1                        : ok=11   changed=8    unreachable=0    failed=0
web1                       : ok=16   changed=12   unreachable=0    failed=0
web2                       : ok=16   changed=12   unreachable=0    failed=0

Also check Ansible Galaxy for ready-made roles: https://galaxy.ansible.com/
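
One more thing about roles: when calling a role you can pass it parameters, which take precedence over the role's own vars. A sketch (the 8080 override is hypothetical, just to show the syntax):

```yaml
---
- hosts: web
  become: yes
  roles:
  - role: webserver
    http_port: 8080   # overrides http_port from roles/webserver/vars/main.yaml
```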

Regards.
