Ansible Basics Continued, and an HAProxy Load Balancer Role


We are going to continue with our configuration playbook, just to practice a little more.

Here is the git repo with all the files that we work with:

https://github.com/likid0/AnsibleSeries.git



What we want to achieve for now is:

Get iptables enabled on the load balancer and web servers, and open port 80 on all of them.
Install and configure a load balancer role using HAProxy.
If new web nodes are added, configure them in the HAProxy config.
Check the work.

First we are going to add the update of all packages on the servers to the server_basic role, and remove it from the webserver and db roles:

[liquid@liquid-ibm:server_basic/tasks]$ pwd                                                                                                                                              (10-07 13:02)
/home/liquid/vagrant/ansible/ansible/roles/server_basic/tasks
[liquid@liquid-ibm:server_basic/tasks]$ cat main.yaml                                                                                                                                    (10-07 13:04)
---
- include: iptables.yaml
- include: ntp.yaml
- include: selinux.yaml
- include: update-soft.yaml

[liquid@liquid-ibm:roles/server_basic]$ cat tasks/iptables.yaml                                                                                                                          (10-12 04:35)
---
- name: Ensure Iptables is installed
  apt: name=iptables state=latest
  when: ansible_os_family == "Debian"
- name: Disable firewall ufw
  ufw: state=disabled policy=allow
  when: ansible_distribution == "Ubuntu"

- name: Ensure Iptables is installed
  yum: name=iptables state=latest
  when: ansible_os_family == "RedHat"
- name: Ensure iptables-services is installed
  yum: name=iptables-services state=latest
  when: ansible_os_family == "RedHat"
- name: disable firewalld
  service: name=firewalld enabled=no state=stopped
  when: ansible_distribution == "CentOS"
- name: Start iptables service
  service: name=iptables enabled=yes state=restarted
  when: ansible_distribution == "CentOS"
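
Note that these tasks install and enable iptables, but they don't open port 80 by themselves. As a sketch of one way to do it, we could ship the rules file from the role; the iptables-rules.j2 template and the handler name here are hypothetical, not part of the repo as shown:

- name: Deploy iptables rules that accept traffic on port 80
  template: src=iptables-rules.j2 dest=/etc/sysconfig/iptables
  when: ansible_distribution == "CentOS"
  notify:
    - Restart iptables

with the relevant line of iptables-rules.j2 being something like:

-A INPUT -p tcp --dport 80 -j ACCEPT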

[liquid@liquid-ibm:roles/server_basic]$ cat tasks/ntp.yaml                                                                                                                               (10-12 04:35)
---
- name: Check if ntp is installed and updated
  yum: name=ntp state=latest
  when: ansible_os_family == "RedHat"
- name: Check if ntp is installed and updated
  apt: name=ntp state=latest
  when: ansible_os_family == "Debian"
- name: Configure the ntp daemon with a template
  template: src=ntp.j2 dest=/etc/ntp.conf
  notify: 
      - NTP restart
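
The ntp.j2 template itself isn't shown in this listing; given the ntp1server and ntp2server variables defined in vars/main.yaml further down, its server lines would look roughly like this (our sketch, not the repo's file):

server {{ ntp1server }}
server {{ ntp2server }}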

[liquid@liquid-ibm:roles/server_basic]$ cat tasks/update-soft.yaml                                                                                                                       (10-12 04:35)
---
- name: Update-software on node
  yum: name=* state=latest update_cache=yes
  when: ansible_os_family == "RedHat"

- name: update apt cache
  apt: update_cache=yes 
  when: ansible_os_family == "Debian"

- name: Update all packages on node
  apt:  upgrade=dist
  when: ansible_os_family == "Debian"


OK, now let's fix an ntp problem we had: on Ubuntu the daemon is called ntp, and on CentOS it is ntpd, so we are going to use the include_vars module. It is very helpful when we need to load vars depending on certain conditions:

[liquid@liquid-ibm:server_basic/tasks]$ cat main.yaml                                                                                                                                    (10-07 13:00)
---
- include_vars: "{{ ansible_os_family }}.yaml"
- include: iptables.yaml
- include: ntp.yaml
- include: selinux.yaml
- include: update-soft.yaml

[liquid@liquid-ibm:server_basic/vars]$ cat main.yaml                                                                                                                                     (10-09 07:07)
---
ntp1server: hora.rediris.es
ntp2server: 0.centos.pool.ntp.org
[liquid@liquid-ibm:server_basic/vars]$ cat Debian.yaml                                                                                                                                   (10-09 07:07)
---
ntp_daemon: ntp
[liquid@liquid-ibm:server_basic/vars]$ cat RedHat.yaml                                                                                                                                   (10-09 07:07)
---
ntp_daemon: ntpd

[liquid@liquid-ibm:roles/server_basic]$ cat handlers/main.yaml                                                                                                                           (10-12 04:37)
---
- name: NTP restart
  service: name={{ ntp_daemon }} state=restarted



To work with different OSes/distros there is a better way, using the group_by module, but for the moment we are just using the basics to get a grip on Ansible.
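
As a quick taste of that approach, here is a minimal sketch (ours, not part of the repo): group_by builds runtime groups from a fact, and later plays can target those groups instead of repeating when conditions on every task:

- hosts: all
  tasks:
    - name: Create a runtime group per OS family (RedHat, Debian, ...)
      group_by: key={{ ansible_os_family }}

- hosts: RedHat
  tasks:
    - name: Runs only on RedHat-family nodes, no when needed
      yum: name=ntp state=latest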

OK, so now that we have our server_basic role ready, let's move on to the load_balancer role:

[liquid@liquid-ibm:ansible/roles]$ tree load_balancer                                                                                                                                    (10-09 07:40)
load_balancer
├── handlers
│   └── main.yaml
├── tasks
│   └── main.yaml
├── templates
│   └── haproxy.j2
└── vars
    └── main.yaml

First the tasks file:

- name: install haproxy
  apt: name=haproxy state=present
  when:  ansible_os_family  == "Debian"
- name: Enable init script
  replace: dest='/etc/default/haproxy' regexp='ENABLED=0' replace='ENABLED=1'
  when:  ansible_os_family  == "Debian"
- name: Update HAProxy config
  template: src=haproxy.j2 dest=/etc/haproxy/haproxy.cfg backup=yes
  notify: 
    - restart haproxy
- name: haproxy service running and enabled
  service: name=haproxy state=started enabled=yes


Here we are using the replace module, with its regular expression option, to enable the init script for haproxy.

[liquid@liquid-ibm:ansible/roles]$ cat load_balancer/handlers/main.yaml                                                                                                                  (10-09 07:41)
- name: restart haproxy
  service: name=haproxy state=restarted

[liquid@liquid-ibm:ansible/roles]$ cat load_balancer/vars/main.yaml                                                                                                                      (10-09 07:41)
haproxy_app_name: webapp
haproxy_mode: http
haproxy_enable_stats: enable 
haproxy_algorithm: roundrobin
haproxy_backend_servers:
  - {name: web1, ip: 10.0.1.5, port: 80, paramstring: cookie A check}
  - {name: web2, ip: 10.0.2.5, port: 80, paramstring: cookie A check}

[liquid@liquid-ibm:ansible/roles]$ cat load_balancer/templates/haproxy.j2                                                                                                                (10-09 07:41)
global
  log 127.0.0.1 local0 notice
  maxconn 2000
  user haproxy
  group haproxy

defaults
  log     global
  mode    http
  option  httplog
  option  dontlognull
  retries 3
  option redispatch
  timeout connect  5000
  timeout client  10000
  timeout server  10000

listen {{haproxy_app_name}} 0.0.0.0:80
  mode {{haproxy_mode}}
  balance {{haproxy_algorithm}}
  option httpclose
  option forwardfor
  {% for server in haproxy_backend_servers %}
  server {{server.name}} {{server.ip}}:{{server.port}} {{server.paramstring}}
  {% endfor %}    

Here we can see we use a for loop to go through all the haproxy_backend_servers entries we have defined.
We access each entry's fields dict-style, as server.name, server.ip, and so on.
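
With the two backend servers defined in the vars file above, the listen block of the rendered haproxy.cfg comes out roughly like this (modulo whitespace from the loop):

listen webapp 0.0.0.0:80
  mode http
  balance roundrobin
  option httpclose
  option forwardfor
  server web1 10.0.1.5:80 cookie A check
  server web2 10.0.2.5:80 cookie A check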

OK, so let's run the play:

[liquid@liquid-ibm:ansible/ansible]$ ansible-playbook site.yaml --tags lb                                                                                                                (10-09 07:35)

PLAY [web] *********************************************************************

TASK [setup] *******************************************************************
ok: [web2]
ok: [web1]

PLAY [db] **********************************************************************

TASK [setup] *******************************************************************
ok: [db1]

PLAY [lb] **********************************************************************

TASK [setup] *******************************************************************
ok: [lb1]

TASK [server_basic : include_vars] *********************************************
ok: [lb1]

TASK [server_basic : Ensure Iptables is installed] *****************************
ok: [lb1]

TASK [server_basic : Disable firewall ufw] *************************************
ok: [lb1]

TASK [server_basic : Ensure Iptables is installed] *****************************
skipping: [lb1]

TASK [server_basic : Ensure iptables-services is installed] ********************
skipping: [lb1]

TASK [server_basic : disable firewalld] ****************************************
skipping: [lb1]

TASK [server_basic : Start iptables service] ***********************************
skipping: [lb1]

TASK [server_basic : Check if ntp is installed and updated] ********************
skipping: [lb1]

TASK [server_basic : Check if ntp is installed and updated] ********************
ok: [lb1]

TASK [server_basic : Configure the ntp daemon with a template] *****************
ok: [lb1]

TASK [server_basic : disable selinux] ******************************************
skipping: [lb1]

TASK [server_basic : Update-software on node] **********************************
skipping: [lb1]

TASK [server_basic : update apt cache] *****************************************
skipping: [lb1]

TASK [server_basic : Update all packages on node] ******************************
skipping: [lb1]

TASK [load_balancer : install haproxy] *****************************************
ok: [lb1]

TASK [load_balancer : haproxy service running and enabled] *********************
ok: [lb1]

TASK [load_balancer : Update HAProxy config] ***********************************
ok: [lb1]

PLAY RECAP *********************************************************************
db1                        : ok=1    changed=0    unreachable=0    failed=0   
lb1                        : ok=9    changed=0    unreachable=0    failed=0   
web1                       : ok=1    changed=0    unreachable=0    failed=0   
web2                       : ok=1    changed=0    unreachable=0    failed=0   

Let's check it's working:

[liquid@liquid-ibm:ansible/roles]$ curl http://10.0.3.5                                                                                                                                  (10-09 07:44)

Ansible Test. Server IP: 192.168.121.70 hostname: web1 Distro: CentOS

[liquid@liquid-ibm:ansible/roles]$ curl http://10.0.3.5 (10-09 07:45)

Ansible Test. Server IP: 192.168.121.5 hostname: web2 Distro: CentOS

[liquid@liquid-ibm:ansible/roles]$ curl http://10.0.3.5 (10-09 07:45)

Ansible Test. Server IP: 192.168.121.70 hostname: web1 Distro: CentOS

[liquid@liquid-ibm:ansible/roles]$ curl http://10.0.3.5 (10-09 07:45)

Ansible Test. Server IP: 192.168.121.5 hostname: web2 Distro: CentOS

OK, we have a very basic haproxy working! But what we really want is for the list of backend servers to be created dynamically, by looking at the servers in the web group. So in our template we now use:

[liquid@liquid-ibm:load_balancer/templates]$ cat haproxy.j2                                      (10-09 10:10)
global
  log 127.0.0.1 local0 notice
  maxconn 2000
  user haproxy
  group haproxy

defaults
  log     global
  mode    http
  option  httplog
  option  dontlognull
  retries 3
  option redispatch
  timeout connect  5000
  timeout client  10000
  timeout server  10000

listen {{haproxy_app_name}} 0.0.0.0:80
  mode {{haproxy_mode}}
  balance {{haproxy_algorithm}}
  option httpclose
  option forwardfor except 127.0.0.0/8
  {% for host in groups.web %}
  server {{ host }} {{ hostvars[host]['ansible_' + iface].ipv4.address }}:{{ httpd_port }} {{ paramstring }}
  {% endfor %}

In the last three lines we loop over all the hosts in the web group. Used together with hostvars, this lets us read variables from other hosts, so we take each server's IP from its facts. iface, httpd_port and paramstring are static variables that we have specified in our variables file (a quick way to inspect the facts the template reads is shown after the curl tests below):

[liquid@liquid-ibm:load_balancer/vars]$ cat main.yaml                                            (10-09 10:14)
haproxy_app_name: webapp
haproxy_mode: http
haproxy_enable_stats: enable
haproxy_algorithm: roundrobin
httpd_port: 80
haproxy_backend_servers:
iface: eth2
paramstring: cookie A check

So now, if we want to increase the number of web servers, we just have to add the server to the web group and run the playbook again. Let's check:

[liquid@liquid-ibm:ansible/ansible]$ cat inventory                                               (10-09 10:55)
[web]
web1 ansible_ssh_host=10.0.1.5
web2 ansible_ssh_host=10.0.2.5
web3 ansible_ssh_host=10.0.5.5

[lb]
lb1 ansible_ssh_host=10.0.3.5

[db]
db1 ansible_ssh_host=10.0.4.5

[dc1-madrid:children]
web
lb

[dc2-madrid:children]
db

[dc-madrid:children]
dc1-madrid
dc2-madrid

[dc-madrid:vars]
ansible_ssh_user=vagrant

We add the web3 server to our inventory, and run the play again:

........

TASK [server_basic : disable selinux] ******************************************
skipping: [lb1]

TASK [load_balancer : install haproxy] *****************************************
changed: [lb1]

TASK [load_balancer : Enable init script] **************************************
changed: [lb1]

TASK [load_balancer : Update HAProxy config] ***********************************
changed: [lb1]

TASK [load_balancer : haproxy service running and enabled] *********************
ok: [lb1]

RUNNING HANDLER [server_basic : NTP restart] ***********************************
changed: [lb1]

RUNNING HANDLER [load_balancer : restart haproxy] ******************************
changed: [lb1]

PLAY RECAP *********************************************************************
db1                        : ok=14   changed=9    unreachable=0    failed=0
lb1                        : ok=14   changed=8    unreachable=0    failed=0
web1                       : ok=19   changed=14   unreachable=0    failed=0
web2                       : ok=19   changed=14   unreachable=0    failed=0
web3                       : ok=19   changed=14   unreachable=0    failed=0

Now we test our work:

[liquid@liquid-ibm:ansible/ansible]$ curl 10.0.3.5                                               (10-09 22:59)

Ansible Test. Server IP: 192.168.121.153 hostname: web1 Distro: CentOS

[liquid@liquid-ibm:ansible/ansible]$ curl 10.0.3.5 (10-09 22:59)

Ansible Test. Server IP: 192.168.121.218 hostname: web2 Distro: CentOS

[liquid@liquid-ibm:ansible/ansible]$ curl 10.0.3.5 (10-09 22:59)

Ansible Test. Server IP: 192.168.121.116 hostname: web3 Distro: CentOS
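
As mentioned above, if the loop ever renders a wrong or empty address, the facts the template reads can be inspected per host with the setup module's filter parameter; something like this (our own check, using the eth2 iface from the vars file):

ansible web3 -i inventory -m setup -a 'filter=ansible_eth2'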

OK, so that's our load balancer working.