Part 15: GFS2 My Study Notes for Red Hat Certificate of Expertise in Clustering and Storage Management Exam (EX436)

GFS2:

The rule with GFS2 is that smaller is better: it is better to have ten 1TB filesystems than a single 10TB filesystem.
In general, 4K blocks are preferred because 4K is the default (memory) page size for Linux. Unlike other filesystems, GFS2 performs most of its operations using 4K kernel buffers. If your block size is 4K, the kernel has less work to do manipulating the buffers.
It is not recommended to run a filesystem that is more than 85 percent full, although this figure can vary depending on the workload.
It is generally recommended to mount GFS2 filesystems with the noatime and nodiratime arguments. This lets GFS2 spend less time updating disk inodes on every access.
You should disable SELinux on GFS2 filesystems.
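
Putting the mount recommendations above together, an fstab entry could look like this (a sketch; the device and mount point are the ones used later in these notes):

```
/dev/vggfs/lvgfs2   /gfs   gfs2   noatime,nodiratime   0 0
```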

Configure and create a GFS2 filesystem.

We install gfs2-utils on all nodes:

[root@foserver01 vms]# for i in 1 2 3 4; do ssh centos-clase$i "yum install -y -q gfs2-utils.x86_64"; done

We create a clustered VG:

[root@centos-clase1 ~]# pvcreate /dev/mapper/clusterhd0p2
Physical volume "/dev/mapper/clusterhd0p2" successfully created
[root@centos-clase1 ~]#
[root@centos-clase1 ~]# vgcreate -c y /dev/vggfs /dev/mapper/clusterhd0p2
Clustered volume group "vggfs" successfully created
[root@centos-clase1 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
vg_rootvg 1 4 0 wz--n- 19.59g 6.88g
vg_rootvg 1 4 0 wz--n- 19.59g 6.88g
vgcluster 1 1 0 wz--nc 1016.00m 616.00m
vggfs 1 0 0 wz--nc 860.00m 860.00m
vgsamba 1 1 0 wz--nc 4.00m 0
[root@centos-clase1 ~]# lvcreate -L 400M -n lvgfs2 /dev/vggfs
Logical volume "lvgfs2" created

Now we create the gfs2 filesystem; we use lock_dlm, and we create 3 journals, one for each node in the cluster:

[root@centos-clase1 ~]# mkfs.gfs2 -t fomvsclu:gfs -p lock_dlm -j 3 /dev/vggfs/lvgfs2
This will destroy any data on /dev/vggfs/lvgfs2.
It appears to contain: symbolic link to `../dm-15'

Are you sure you want to proceed? [y/n] y

Blocksize: 4096
Device Size 0.39 GB (102400 blocks)
Filesystem Size: 0.39 GB (102397 blocks)
Journals: 3
Resource Groups: 2
Locking Protocol: "lock_dlm"
Lock Table: "fomvsclu:gfs"
UUID: 0525103a-ba71-643b-d021-3fca6b5680b8
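
As a sanity check on the output above, the reported block count matches the 400M LV at the 4K block size (a quick sketch):

```shell
# 102400 blocks of 4096 bytes each = 400MB, matching the LV we created above
blocks=102400
blocksize=4096
echo "$(( blocks * blocksize / 1024 / 1024 ))MB"
```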

We can now mount the fs on all 3 nodes:

[root@foserver01 vms]# for i in 1 2 3 4; do ssh centos-clase$i "mkdir /gfs"; done
[root@foserver01 vms]# for i in 1 2 3 4; do ssh centos-clase$i "mount /dev/vggfs/lvgfs2 /gfs"; done
[root@foserver01 vms]# for i in 1 2 3 ; do ssh centos-clase$i "df -h | grep -i gfs"; done
/dev/mapper/vggfs-lvgfs2
400M 388M 13M 97% /gfs
/dev/mapper/vggfs-lvgfs2
400M 388M 13M 97% /gfs
/dev/mapper/vggfs-lvgfs2
400M 388M 13M 97% /gfs

You can see that a freshly formatted gfs2 filesystem shows 388M used even though the FS is empty; this is because we have 3 journals of 128MB each: 3*128=384MB.

[root@centos-clase1 ~]# gfs2_tool journals /gfs
journal2 - 128MB
journal1 - 128MB
journal0 - 128MB
3 journal(s) found.
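
The arithmetic behind the 388M figure can be sketched as follows (journal count and size taken from the output above):

```shell
# The three preallocated journals account for almost all of the space
# used on the empty filesystem: 3 * 128MB = 384MB
journals=3
journal_mb=128
echo "$(( journals * journal_mb ))MB used by journals"
```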

Extending the FS:

[root@centos-clase1 ~]# lvextend -L +250M /dev/mapper/vggfs-lvgfs2
Rounding size to boundary between physical extents: 252.00 MiB
Extending logical volume lvgfs2 to 652.00 MiB
Logical volume lvgfs2 successfully resized

We can first do a test run with -T; it does all the same work but doesn't commit the changes to disk:
[root@centos-clase1 ~]# gfs2_grow -T /dev/mapper/vggfs-lvgfs2

If it finishes ok, then we proceed:
[root@centos-clase1 ~]# gfs2_grow /dev/mapper/vggfs-lvgfs2
FS: Mount Point: /gfs
FS: Device: /dev/dm-15
FS: Size: 102397 (0x18ffd)
FS: RG size: 51188 (0xc7f4)
DEV: Size: 166912 (0x28c00)
The file system grew by 252MB.
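
The 252MB figure follows from the block counts that gfs2_grow printed (a sketch; the numbers are taken from the output above):

```shell
# gfs2_grow reported the device going from 102400 to 166912 4K blocks
old_blocks=102400
new_blocks=166912
echo "grew by $(( (new_blocks - old_blocks) * 4096 / 1024 / 1024 ))MB"
```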

[root@foserver01 vms]# for i in 1 2 3 ; do ssh centos-clase$i "df -h | grep -i gfs"; done
/dev/mapper/vggfs-lvgfs2
600M 388M 212M 65% /gfs
/dev/mapper/vggfs-lvgfs2
600M 388M 212M 65% /gfs
/dev/mapper/vggfs-lvgfs2
600M 388M 212M 65% /gfs

Let's say we want to add a new node to the cluster; we need to add another journal:

[root@centos-clase1 ~]# gfs2_tool journals /gfs
journal2 - 128MB
journal1 - 128MB
journal0 - 128MB
3 journal(s) found.

Now we are going to add another 128MB journal; we need at least 128MB free on the FS:

[root@centos-clase1 ~]# gfs2_jadd -j 1 -J 128 /gfs
Filesystem: /gfs
Old Journals 3
New Journals 4

[root@centos-clase1 ~]# gfs2_tool journals /gfs
journal2 - 128MB
journal3 - 128MB
journal1 - 128MB
journal0 - 128MB
4 journal(s) found.
[root@centos-clase1 ~]# df -h
/dev/mapper/vggfs-lvgfs2
600M 518M 83M 87% /gfs

GFS2 log options:
data=[ordered|writeback] When data=ordered is set, user data modified in a transaction is flushed to disk before the transaction is committed to disk. This should prevent the user from seeing uninitialized blocks in a file after a crash. When data=writeback is set, user data is written to disk at any time after it is dirtied; this mode does not provide the same consistency guarantees as ordered mode, but it should be slightly faster for some workloads. The default is ordered mode.
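
For example, to select writeback mode at mount time via fstab (a sketch; device and mount point from this setup):

```
/dev/vggfs/lvgfs2   /gfs   gfs2   noatime,nodiratime,data=writeback   0 0
```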

We can also force a file or a directory on a gfs2 filesystem to write all of its data (not only metadata) to the journal before flushing it to disk, using file attributes (lsattr/chattr). man chattr describes the option:

A file with the ‘j’ attribute has all of its data written to the ext3 journal before being written to the file itself, if the filesystem is mounted with the "data=ordered"
or "data=writeback" options. When the filesystem is mounted with the "data=journal" option all file data is already journalled and this attribute has no effect. Only the
superuser or a process possessing the CAP_SYS_RESOURCE capability can set or clear this attribute.

If we give the attribute to a directory, everything created underneath it will inherit the attribute:
[root@centos-clase1 gfs]# mkdir prueba
[root@centos-clase1 gfs]# lsattr -a prueba
--------------- prueba/.
[root@centos-clase1 gfs]# chattr +j prueba
[root@centos-clase1 gfs]# lsattr -a prueba
---------j----- prueba/.
[root@centos-clase1 gfs]# cd prueba/
[root@centos-clase1 prueba]# ls
[root@centos-clase1 prueba]# touch pepe
[root@centos-clase1 prueba]# lsattr -a pepe
---------j----- pepe
[root@centos-clase1 prueba]#

To mount with ACLs, we need to use -o remount,acl (YOU HAVE TO DO IT ON ALL NODES!!) or add it to the /etc/fstab file:

[root@centos-clase1 prueba]# mount -o remount,acl /gfs
[root@centos-clase1 prueba]# mount | grep -i /gfs
/dev/mapper/vggfs-lvgfs2 on /gfs type gfs2 (rw,seclabel,relatime,hostdata=jid=0,acl)

To use user quotas, we have 3 mount options:

quota=on      --> quotas activated and enforced
quota=account --> quota accounting activated, but not enforced
quota=off     --> quotas not activated

[root@foserver01 vms]# for i in 1 2 3 ; do ssh centos-clase$i "mount -o remount,quota=on /gfs"; done

Initialize the quota database:

[root@centos-clase1 ~]# quotacheck -ug /gfs
[root@centos-clase1 ~]#

With edquota we can assign quota limits for users, on all filesystems with quotas activated:

[root@centos-clase1 ~]# edquota -u ibmsop

We can also use the setquota command to script quotas.

By default gfs2 filesystems update quota info every minute; you can change this interval with the quota_quantum= mount option.

You can also force a quota info sync to disk using the quotasync -a command.

You can check user quotas with the quota -u command:

[root@centos-clase1 ~]# quotasync -a
[root@centos-clase1 ~]# quota -u ibmsop
Disk quotas for user ibmsop (uid 500): none

And get a report on all quotas in a fs with:

[root@centos-clase1 ~]# repquota /gfs
*** Report for user quotas on device /dev/dm-15
Block grace time: 00:00; Inode grace time: 00:00
                        Block limits                File limits
User            used    soft    hard  grace    used  soft  hard  grace
----------------------------------------------------------------------
root      --      28       0       0               0     0     0

SUPERBLOCK CHANGES: if, for example, we want to take this gfs2 fs to another cluster, we need to change the cluster name in the superblock:

Tune a GFS2 superblock
gfs2_tool sb proto [newval]
gfs2_tool sb table [newval]
gfs2_tool sb ondisk [newval]
gfs2_tool sb multihost [newval]
gfs2_tool sb all

[root@centos-clase1 ~]# gfs2_tool sb /dev/mapper/vggfs-lvgfs2 all
mh_magic = 0x01161970
mh_type = 1
mh_format = 100
sb_fs_format = 1801
sb_multihost_format = 1900
sb_bsize = 4096
sb_bsize_shift = 12
no_formal_ino = 2
no_addr = 22
no_formal_ino = 1
no_addr = 21
sb_lockproto = lock_dlm
sb_locktable = fomvsclu:gfs
uuid = 0525103a-ba71-643b-d021-3fca6b5680b8
[root@centos-clase1 ~]# gfs2_tool sb /dev/mapper/vggfs-lvgfs2 proto
current lock protocol name = "lock_dlm"

As you can see, the lock table contains the cluster name: fomvsclu:gfs

WE HAVE TO UMOUNT THE FS ON ALL NODES BEFORE MAKING A CHANGE TO THE SB!!

After umount we can modify:
[root@centos-clase1 ~]# gfs2_tool sb /dev/mapper/vggfs-lvgfs2 table fomvsclu:gfs

There is also the option of mounting the FS on a node where the cluster (dlm) is not working, for example on a node used for backups. We can do this as follows:

First we umount the fs and deactivate the vg on ALL NODES!!!!:
[root@foserver01 vms]# for i in 1 2 3 ; do ssh centos-clase$i "umount /gfs"; done
[root@foserver01 vms]# for i in 1 2 3 ; do ssh centos-clase$i "df -h | grep -i gfs"; done
[root@foserver01 vms]# for i in 1 2 3 ; do ssh centos-clase$i "vgchange -a n /dev/vggfs"; done
0 logical volume(s) in volume group "vggfs" now active
0 logical volume(s) in volume group "vggfs" now active
0 logical volume(s) in volume group "vggfs" now active

and from a fourth node outside the cluster:

[root@centos-clase4 ~]# vgchange -ay vggfs --config 'global { locking_type = 0 }'
WARNING: Locking disabled. Be careful! This could corrupt your metadata.
Unable to determine exclusivity of lvgfs2
1 logical volume(s) in volume group "vggfs" now active

[root@centos-clase4 ~]# mount -o lockproto=lock_nolock,noatime,nodiratime /dev/vggfs/lvgfs2 /data
[root@centos-clase4 ~]# ls -l /data
total 16
-rw-r--r--. 1 root root 0 Sep 6 10:40 lol
drwxr-xr-x. 2 root root 3864 Sep 6 10:54 prueba
[root@centos-clase4 ~]#

We are going to use a gfs2 filesystem for our sambapub resource group:

[root@centos-clase2 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
clonedatosvg 1 2 0 wz--n- 484.00m 84.00m
vg_rootvg 1 4 0 wz--n- 19.59g 6.88g
vg_rootvg 1 4 0 wz--n- 19.59g 6.88g
vgcluster 1 1 0 wz--nc 1016.00m 616.00m
vgsamba 2 0 0 wz--nc 864.00m 864.00m
[root@centos-clase2 ~]# lvcreate -L 700M -n lvsamba /dev/vgsamba
Logical volume "lvsamba" created

In this case we are going to use 64MB journals, to use less space:

[root@centos-clase2 ~]# mkfs.gfs2 -J 64 -t fomvsclu:gfs2smb -p lock_dlm -j 3 /dev/vgsamba/lvsamba
This will destroy any data on /dev/vgsamba/lvsamba.
It appears to contain: symbolic link to `../dm-17'

Are you sure you want to proceed? [y/n] y

Device: /dev/vgsamba/lvsamba
Blocksize: 4096
Device Size 0.68 GB (179200 blocks)
Filesystem Size: 0.68 GB (179197 blocks)
Journals: 3
Resource Groups: 3
Locking Protocol: "lock_dlm"
Lock Table: "fomvsclu:gfs2smb"
UUID: d75b58ad-eaca-78b0-0248-03769a259241

[root@centos-clase2 ~]#

We mount and check on all nodes:

[root@foserver01 ~]# for i in 1 2 3 ; do ssh centos-clase$i "mount /dev/vgsamba/lvsamba /cifs-export"; done
[root@foserver01 ~]# for i in 1 2 3 ; do ssh centos-clase$i "df -h | grep cifs"; done
700M 196M 505M 28% /cifs-export
700M 196M 505M 28% /cifs-export
700M 196M 505M 28% /cifs-export
[root@foserver01 ~]# for i in 1 2 3 ; do ssh centos-clase$i "umount /cifs-export"; done

We can now configure the fs to always be mounted on all 3 nodes, either by using /etc/fstab and configuring the gfs2 service to mount the FS on boot, or by creating a gfs2 resource and adding it to the service:
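
For reference, the /etc/fstab route would look roughly like this on every node (a sketch based on this setup; the gfs2 init script then mounts it at boot, e.g. chkconfig gfs2 on):

```
/dev/vgsamba/lvsamba   /cifs-export   gfs2   acl,quota=on   0 0
```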

We are going to do the latter first:

[root@centos-clase1 ~]# ccs -h centos-clase1 --addresource lvm name=vgsamba vg_name=vgsamba
[root@centos-clase1 ~]# ccs -h centos-clase1 --addresource fs name=sambafspub mountpoint=/cifs-export device=/dev/vgsamba/lvsamba force_unmount=off fstype=gfs2 force_fsck=off options=acl,quota=on
[root@centos-clase1 ~]# ccs -h centos-clase1 --lsservices
resources:
lvm: name=vgsamba, vg_name=vgsamba
fs: name=sambafspub, force_fsck=off, force_unmount=off, fstype=gfs2, device=/dev/vgsamba/lvsamba, mountpoint=/cifs-export, options=acl,quota=on

We now modify the configuration of the resource in the /etc/cluster/cluster.conf:

We need to remove previous resources so attributes don't collide; we have to delete the old fs resource, because it has the same mount point: mountpoint="/cifs-export".

Now we can update the config and start the service:

[root@centos-clase2 ~]# clusvcadm -e service:sambapublic
Local machine trying to enable service:sambapublic...Success
service:sambapublic is now running on centosclu2hb1
[root@centos-clase2 ~]# mount | grep -i gfs2
/dev/mapper/vgsamba-lvsamba on /cifs-export type gfs2 (rw,seclabel,relatime,hostdata=jid=0)
[root@centos-clase2 ~]#

or with mountpoint options:

Sep 12 21:47:52 rgmanager [lvm] Starting volume group, vgsamba
Sep 12 21:47:53 rgmanager [fs] Running fsck on /dev/dm-20
Sep 12 21:47:53 rgmanager [fs] mounting /dev/dm-20 on /cifs-export
Sep 12 21:47:54 rgmanager [fs] mount -o acl,quota=on /dev/dm-20 /cifs-export

[root@centos-clase3 ~]# cat /etc/cluster/cluster.conf | grep -i options

[root@centos-clase2 ~]# mount | grep -i cif
/dev/mapper/vgsamba-lvsamba on /cifs-export type gfs2 (rw,seclabel,relatime,hostdata=jid=0,acl,quota=on)

This is a normal HA configuration, which doesn't make sense for a gfs2 shared filesystem, so we are going to configure it without HA LVM:

We activate the vgs on all servers:

[root@foserver01 init.d]# for i in 1 2 3 ; do ssh centos-clase$i "vgchange -a y /dev/vgsamba"; done
1 logical volume(s) in volume group "vgsamba" now active
1 logical volume(s) in volume group "vgsamba" now active
1 logical volume(s) in volume group "vgsamba" now active
[root@foserver01 init.d]#

Remove the lvm resource from the resource group:

[root@centos-clase1 ~]# cman_tool version -r

[root@centos-clase1 ~]# clusvcadm -e service:sambapublic
Local machine trying to enable service:sambapublic...Success
service:sambapublic is now running on centosclu1hb1
