Part 9. Sun Cluster 3.2: Create a HA failover resource inside a zone cluster

Now we are going to create and configure a zone cluster and install an HA failover Sybase data service.

First we create the directories for the cluster zones:

mkdir -p /zones/clzone1
chmod 700 /zones/clzone1
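The zonepath directory must exist with mode 700 on every node; zoneadm is picky about zonepath permissions. A minimal sanity check, using a scratch path for illustration and GNU stat (on Solaris 10 itself you would check with ls -ld):

```shell
# Create a zone root and verify it got mode 700 (scratch path for illustration).
ZONEDIR=/tmp/clzone1-demo
mkdir -p "$ZONEDIR"
chmod 700 "$ZONEDIR"
stat -c %a "$ZONEDIR"   # → 700
```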

Now we are going to create a configuration file:

[root@vm2:/zones]# cat zonecreate.file (10-14 18:34)
create -b -------> Full zone cluster

set zonepath=/zones/clzone1 -------> zone path where the zone software is installed
set brand=cluster
set enable_priv_net=true
set autoboot=true
set ip-type=shared
add node
set physical-host=vm1 --------> hostname of the voting node
set hostname=clzone1 -------> hostname of zone
add net
set address=11.0.0.130/24 ---------> zone IP address
set physical=e1000g2 -----------> physical network
end
end
add sysid
set system_locale=C
set terminal=xterm
set security_policy=NONE
set nfs4_domain=dynamic
set timezone=MET
set root_password=/Lsaa7qgfbUTwks -------> encrypted root password (from /etc/shadow)
end
add node
set physical-host=vm2 ----------> info for the second node
set hostname=clzone2
add net
set address=11.0.0.131/24
set physical=e1000g2
end
end
commit
exit

We create the zone cluster:

[root@vm2:/zones]# clzc configure -f zonecreate.file zc1
On line 32 of zonecreate.file:
zc1: CCR transaction error
Failed to assign a subnet for zone zc1.
zc1: failed to verify
zc1: CCR transaction error
zc1: CCR transaction error
Failed to assign a subnet for zone zc1.
zc1: failed to verify
zc1: CCR transaction error

As you can see, we get a subnet error. Thanks to Tim, we found out that after upgrading/patching 3.2 to release U3 you need to specify the number of zone clusters:

[root@vm1:/]# cluster show-netprops (10-17 05:53)

=== Private Network ===

private_netaddr: 172.16.0.0
private_netmask: 255.255.248.0
max_nodes: 64
max_privatenets: 10

After setting the property:

[root@vm1:/]# cluster set-netprops -p num_zoneclusters=12
[root@vm1:/]# cluster show-netprops

=== Private Network ===

private_netaddr: 172.16.0.0
private_netmask: 255.255.240.0
max_nodes: 64
max_privatenets: 10
num_zoneclusters: 12
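Note that the reconfiguration also widened the private netmask from 255.255.248.0 (/21) to 255.255.240.0 (/20): each zone cluster gets its own slice of the private network, so reserving 12 of them needs more address space. A quick sketch of the difference in address counts:

```shell
# Host addresses available under each private netmask (/21 before, /20 after).
before=$(( 4294967296 - 0xFFFFF800 ))   # 2^32 minus 255.255.248.0
after=$((  4294967296 - 0xFFFFF000 ))   # 2^32 minus 255.255.240.0
echo "$before $after"                   # → 2048 4096
```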

[root@vm2:/zones]# clzc configure -f zonecreate.file zc1
[root@vm2:/zones]# clzc status (10-14 18:29)

=== Zone Clusters ===

--- Zone Cluster Status ---

Name Node Name Zone HostName Status Zone Status
---- --------- ------------- ------ -----------
zc1 vm1 clzone1 Offline Configured
vm2 clzone2 Offline Configured

[root@vm2:/zones]# clzc install zc1 (10-14 18:43)
Waiting for zone install commands to complete on all the nodes of the zone cluster "zc1"...
[root@vm2:/zones]# clzc status (10-14 18:57)

=== Zone Clusters ===

--- Zone Cluster Status ---

Name Node Name Zone HostName Status Zone Status
---- --------- ------------- ------ -----------
zc1 vm1 clzone1 Offline Installed
vm2 clzone2 Offline Installed
[root@vm2:/zones]# clzc boot zc1 (10-14 19:47)
Waiting for zone boot commands to complete on all the nodes of the zone cluster "zc1"...
[root@vm2:/zones]# clzc status (10-14 19:48)

=== Zone Clusters ===

--- Zone Cluster Status ---

Name Node Name Zone HostName Status Zone Status
---- --------- ------------- ------ -----------
zc1 vm1 clzone1 Offline Running
vm2 clzone2 Offline Running

Now we can start creating our Sybase RG.

We log in to the zone cluster:

[root@vm2:/zones]# zlogin zc1
clzone2# ./clrg create -Z zc1 zc1-sybase-rg
clzone2# ./clrg status

=== Cluster Resource Groups ===

Group Name Node Name Suspended Status
---------- --------- --------- ------
zc1-sybase-rg clzone2 No Unmanaged
clzone1 No Unmanaged

We add the logical IP for the service to the zonecluster config:

[root@vm1:/]# clzc configure zc1 (12-02 06:34)
clzc:zc1> add net
clzc:zc1:net> set address=19.132.168.193
clzc:zc1:net> end
clzc:zc1> commit
clzc:zc1> exit

We also add it to /etc/hosts, and then we create the resource:

clzone2# ./clreslogicalhostname create -g zc1-sybase-rg -h sybase-ip sybaseip-rs

clzone2# ./clrg show

=== Resource Groups and Resources ===

Resource Group: zc1-sybase-rg
RG_description:
RG_mode: Failover
RG_state: Unmanaged
Failback: False
Nodelist: clzone2 clzone1

--- Resources for Group zc1-sybase-rg ---

Resource: sybaseip-rs
Type: SUNW.LogicalHostname:3
Type_version: 3
Group: zc1-sybase-rg
R_description:
Resource_project_name: default
Enabled{clzone2}: True
Enabled{clzone1}: True
Monitored{clzone2}: True
Monitored{clzone1}: True
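
For reference, the /etc/hosts entry for sybase-ip pairs the logical-host name with the address we added to the zone-cluster config. The sketch below only demonstrates the format against a scratch file (the real edit goes in /etc/hosts inside the zone cluster):

```shell
# Sketch: hosts entry for the logical host (scratch file, not the real /etc/hosts).
HOSTSFILE=/tmp/hosts-demo
echo '19.132.168.193  sybase-ip' > "$HOSTSFILE"
# clreslogicalhostname resolves its -h argument through hosts lookup.
awk '$2 == "sybase-ip" {print $1}' "$HOSTSFILE"   # → 19.132.168.193
```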

We have our IP; now we are going to add storage. As always, we first register the HAStoragePlus resource type:

clzone2# ./clrt register SUNW.HAStoragePlus
clzone2# ./clresourcetype list
SUNW.LogicalHostname:3
SUNW.SharedAddress:2
SUNW.HAStoragePlus:8
clzone2#

In the global zone, we create a new metaset with a new disk, global device d7:

[root@vm1:/]# metaset -s sybase-ha -a -h vm1 vm2 (12-02 07:38)
[root@vm1:/]# metaset (12-02 07:39)

Set name = otros2, Set number = 5

Host Owner
vm1 Yes
vm2

Driv Dbase

d6 Yes

Set name = sybase-ha, Set number = 6

Host Owner
vm1
vm2
[root@vm1:/]# metaset -s sybase-ha -a /dev/did/dsk/d7 (12-02 07:39)
[root@vm1:/]# metaset -s sybase-ha (12-02 07:40)

Set name = sybase-ha, Set number = 6

Host Owner
vm1 Yes
vm2

Driv Dbase

d7 Yes
[root@vm1:sybase-ha/dsk]# metainit -s sybase-ha d20 1 1 /dev/did/dsk/d7s0
sybase-ha/d20: Concat/Stripe is setup
[root@vm1:sybase-ha/dsk]# metainit -s sybase-ha d2 -m d20 (12-02 07:44)
sybase-ha/d2: Mirror is setup
[root@vm1:sybase-ha/dsk]# metainit -s sybase-ha d200 -p d2 150mb (12-02 07:44)
d200: Soft Partition is setup
[root@vm1:sybase-ha/dsk]# metainit -s sybase-ha d300 -p d2 150mb (12-02 07:44)
d300: Soft Partition is setup
[root@vm1:sybase-ha/dsk]# (12-02 07:45)
[root@vm1:sybase-ha/dsk]# newfs /dev/md/sybase-ha/rdsk/d200 (12-02 07:47)
newfs: construct a new file system /dev/md/sybase-ha/rdsk/d200: (y/n)? y
/dev/md/sybase-ha/rdsk/d200: 307200 sectors in 150 cylinders of 64 tracks, 32 sectors
150.0MB in 10 cyl groups (16 c/g, 16.00MB/g, 7680 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 32832, 65632, 98432, 131232, 164032, 196832, 229632, 262432, 295232,
[root@vm1:sybase-ha/dsk]# newfs /dev/md/sybase-ha/rdsk/d300 (12-02 07:47)
newfs: construct a new file system /dev/md/sybase-ha/rdsk/d300: (y/n)? y
/dev/md/sybase-ha/rdsk/d300: 307200 sectors in 150 cylinders of 64 tracks, 32 sectors
150.0MB in 10 cyl groups (16 c/g, 16.00MB/g, 7680 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 32832, 65632, 98432, 131232, 164032, 196832, 229632, 262432, 295232,
[root@vm1:sybase-ha/dsk]# clzc configure zc1 (12-02 07:48)
clzc:zc1> add fs
clzc:zc1:fs> info
fs:
dir not specified
special not specified
raw not specified
type not specified
options: []
clzc:zc1:fs> set dir=/sybase-db
clzc:zc1:fs> set special=/dev/md/sybase-ha/dsk/d200
clzc:zc1:fs> set raw=/dev/md/sybase-ha/rdsk/d200
clzc:zc1:fs> set type=ufs
clzc:zc1:fs> end
clzc:zc1> add fs
clzc:zc1:fs> set dir=/sybase-bin
clzc:zc1:fs> set special=/dev/md/sybase-ha/dsk/d300
clzc:zc1:fs> set raw=/dev/md/sybase-ha/rdsk/d300
clzc:zc1:fs> set type=ufs
clzc:zc1:fs> end
clzc:zc1> commit
clzc:zc1> info
zonename: zc1
zonepath: /zones/clzone1
autoboot: true
brand: cluster
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
enable_priv_net: true
fs:
dir: /sybase-db
special: /dev/md/sybase-ha/dsk/d200
raw: /dev/md/sybase-ha/rdsk/d200
type: ufs
options: []
fs:
dir: /sybase-bin
special: /dev/md/sybase-ha/dsk/d300
raw: /dev/md/sybase-ha/rdsk/d300
type: ufs
options: []
net:
address: 19.132.168.193
physical: auto
sysid:
root_password: /LB7qgfbUTwks
name_service: NONE
nfs4_domain: dynamic
security_policy: NONE
system_locale: C
terminal: xterm
timezone: MET
node:
physical-host: vm1
hostname: clzone1
net:
address: 19.132.168.191/24
physical: e1000g2
defrouter not specified
node:
physical-host: vm2
hostname: clzone2
net:
address: 19.132.168.192/24
physical: e1000g2
defrouter not specified

Now we create the resource:

clzone2# ./clresource create -t SUNW.HAStoragePlus -g zc1-sybase-rg -p FilesystemMountPoints=/sybase-db sybase-ha-rs

I forgot to add the second FS; to add it, use:

clzone1# ./clresource set -g zc1-sybase-rg -p FilesystemMountPoints=/sybase-db,/sybase-bin sybase-ha-rs

clzone1# ./clrg show zc1-sybase-rg

=== Resource Groups and Resources ===

Resource Group: zc1-sybase-rg
RG_description:
RG_mode: Failover
RG_state: Unmanaged
Failback: False
Nodelist: clzone2 clzone1

--- Resources for Group zc1-sybase-rg ---

Resource: sybaseip-rs
Type: SUNW.LogicalHostname:3
Type_version: 3
Group: zc1-sybase-rg
R_description:
Resource_project_name: default
Enabled{clzone2}: True
Enabled{clzone1}: True
Monitored{clzone2}: True
Monitored{clzone1}: True

Resource: sybase-ha-rs
Type: SUNW.HAStoragePlus:8
Type_version: 8
Group: zc1-sybase-rg
R_description:
Resource_project_name: default
Enabled{clzone2}: True
Enabled{clzone1}: True
Monitored{clzone2}: True
Monitored{clzone1}: True

I'm going to try to start the RG as is, before adding the application resource:

clzone2# ./clrg status

=== Cluster Resource Groups ===

Group Name Node Name Suspended Status
---------- --------- --------- ------
zc1-sybase-rg clzone2 No Unmanaged
clzone1 No Unmanaged

clzone2# ./clrg manage zc1-sybase-rg
clzone2# ./clrg status

=== Cluster Resource Groups ===

Group Name Node Name Suspended Status
---------- --------- --------- ------
zc1-sybase-rg clzone2 No Offline
clzone1 No Offline

clzone2# ./clrg online zc1-sybase-rg
bash-3.00# ./clrg status

=== Cluster Resource Groups ===

Group Name Node Name Suspended Status
---------- --------- --------- ------
zc1-sybase-rg clzone2 No Online
clzone1 No Offline
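
When scripting around failovers it helps to parse the clrg status output instead of eyeballing it. A sketch that pulls out the node currently hosting the RG (a canned sample here; in real use you would pipe ./clrg status into the awk):

```shell
# Extract the node hosting the RG from clrg status output.
# Canned sample below; in practice: ./clrg status | awk ...
status='zc1-sybase-rg clzone2 No Online
 clzone1 No Offline'
printf '%s\n' "$status" | awk '$NF == "Online" {print $(NF-2)}'   # → clzone2
```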

It started OK; we check our mount points and network:

bash-3.00# df -h | grep -i syba
/sybase-db 140M 1.0M 125M 1% /sybase-db
/sybase-bin 140M 1.0M 125M 1% /sybase-bin
bash-3.00# ifconfig -a
lo0:1: flags=2001000849 mtu 8232 index 1
zone zc1
inet 127.0.0.1 netmask ff000000
e1000g2: flags=1000843 mtu 1500 index 2
inet 19.132.168.182 netmask ffffff00 broadcast 19.132.168.255
groupname sc_ipmp0
ether 8:0:27:1d:69:a9
e1000g2:3: flags=1000843 mtu 1500 index 2
zone zc1
inet 19.132.168.192 netmask ffffff00 broadcast 19.132.168.255
e1000g2:4: flags=1001040843 mtu 1500 index 2
zone zc1
inet 19.132.168.193 netmask ffffff00 broadcast 19.132.168.255
e1000g3: flags=69040843 mtu 1500 index 3
inet 19.132.168.185 netmask ffffff00 broadcast 19.132.168.255
groupname sc_ipmp0
ether 8:0:27:9e:57:93
clprivnet0: flags=1009843 mtu 1500 index 6
inet 172.16.4.2 netmask fffffe00 broadcast 172.16.5.255
ether 0:0:0:0:0:2
clprivnet0:3: flags=1009843 mtu 1500 index 6
zone zc1
inet 172.16.6.130 netmask ffffff80 broadcast 172.16.6.255
bash-3.00#

[root@vm1:sybase-ha/dsk]# clzc status (12-02 08:13)

=== Zone Clusters ===

--- Zone Cluster Status ---

Name Node Name Zone HostName Status Zone Status
---- --------- ------------- ------ -----------
zc1 vm1 clzone1 Online Running
vm2 clzone2 Online Running

[root@vm1:sybase-ha/dsk]# clzc show (12-02 08:27)

=== Zone Clusters ===

Zone Cluster Name: zc1
zonename: zc1
zonepath: /zones/clzone1
autoboot: TRUE
ip-type: shared
enable_priv_net: TRUE

--- Solaris Resources for zc1 ---

Resource Name: net
address: 19.132.168.193
physical: auto

Resource Name: fs
dir: /sybase-db
special: /dev/md/sybase-ha/dsk/d200
raw: /dev/md/sybase-ha/rdsk/d200
type: ufs
options: []

Resource Name: fs
dir: /sybase-bin
special: /dev/md/sybase-ha/dsk/d300
raw: /dev/md/sybase-ha/rdsk/d300
type: ufs
options: []

--- Zone Cluster Nodes for zc1 ---

Node Name: vm1
physical-host: vm1
hostname: clzone1

Node Name: vm2
physical-host: vm2
hostname: clzone2

OK, it's working. Now we can register and configure the Sybase resource itself, but I'll leave that for another post.
