Part 7. Sun Cluster 3.2: Live Upgrading from 3.2u1 to 3.2u3

You have to say that Live Upgrade with a ZFS root works like a charm; it is much better integrated than with UFS, and combined with root ZFS snapshots it is something special.

Here we are going to upgrade the 3 nodes in the cluster via Live Upgrade, and do a final reboot into the new BE to get everything working with the new cluster version.
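
Before starting, it is worth confirming the current framework version on each node; these are the same commands we will use later to verify the upgrade (run them on vm1, vm2 and vm3):

[root@vm3:/]# cat /etc/cluster/release
[root@vm3:/]# clnode show-rev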

The first thing is to update the quorum server, so we have to remove the quorum device from the cluster nodes:

[root@vm2:/etc/cluster]# clq remove quoromd1 (10-04 18:42)
[root@vm2:/etc/cluster]# clq status (10-04 18:42)

=== Cluster Quorum ===

--- Quorum Votes Summary ---

Needed   Present   Possible
------   -------   --------
2        3         3

--- Quorum Votes by Node ---

Node Name       Present       Possible       Status
---------       -------       --------       ------
vm3             1             1              Online
vm2             1             1              Online
vm1             1             1              Online

Once removed, we stop the quorum server daemons:

root@x4200m2 # clquorumserver stop 9000
root@x4200m2 # clquorumserver stop 9001
root@x4200m2 # clquorumserver show
=== Quorum Server on port 9000 ===
clquorumserver: (C339181) Quorum server is not yet started on port "9000".

=== Quorum Server on port 9001 ===
clquorumserver: (C339181) Quorum server is not yet started on port "9001".

And remove the old software:

root@x4200m2 # pwd
/var/sadm/prod/SUNWentsyssc32u1
root@x4200m2 # ./uninstall
Unable to access a usable display on the remote system. Continue in command-line mode?(Y/N)
Y
Java Accessibility Bridge for GNOME loaded.

Ready to Uninstall
----------------
The following components will be uninstalled.

Product: Java Availability Suite
Uninstall Location: /var/sadm/prod/SUNWentsyssc32u1
Space Reclaimed: 253.81 KB
---------------------------------------------------
Quorum Server

1. Uninstall
2. Start Over
3. Exit Uninstallation

What would you like to do [1] {"<" goes back, "!" exits}? 1

Once the uninstall has finished, we upload the software to the server, and install the new quorum server:

root@x4200m2 # unzip suncluster-3_2u3-ga-solaris-x86.zip
Archive: suncluster-3_2u3-ga-solaris-x86.zip
Copyright © 2009 Sun Microsystems, Inc., 4150 Network Circle, Santa Clara,
California 95054, U.S.A. All rights reserved.

root@x4200m2 # ./installer
Unable to access a usable display on the remote system. Continue in command-line mode?(Y/N)
Y
Java Accessibility Bridge for GNOME loaded.

Installation Type
-----------------

Do you want to install the full set of Sun Java(TM) Availability Suite
Products and Services? (Yes/No) [Yes] {"<" goes back, "!" exits} No

Choose Software Components - Main Menu
-------------------------------
Note: "* *" indicates that the selection is disabled

[ ] 1. Sun Cluster Geographic Edition 3.2 11/09
[ ] 2. Quorum Server
[ ] 3. Monitoring Console 1.0 Update 1
[ ] 4. High Availability Session Store 4.4.3
[ ] 5. Sun Cluster 3.2 11/09
[ ] 6. Java DB 10.2.2.1
[ ] 7. Sun Cluster Agents 3.2 11/09
[ ] 8. All Shared Components

Enter a comma separated list of products to install, or press R to refresh
the list [] {"<" goes back, "!" exits}: 2

Choose Software Components - Confirm Choices
--------------------------------------------

Based on product dependencies for your selections, the installer will install:

[X] 2. Quorum Server

Installation Complete

Software installation has completed successfully. You can view the installation
summary and log by using the choices below. Summary and log files are available
in /var/sadm/install/logs/.

Once the installation has finished, we check that the config file is OK and start the instance:

root@x4200m2 # tail -1 /etc/scqsd/scqsd.conf
/usr/cluster/lib/sc/scqsd -d /var/scqsd -p 9000

root@x4200m2 # /usr/cluster/bin/clqs start 9000
root@x4200m2 # ps -ef | grep -i 9000
root 16330 1 0 15:50:54 ? 0:00 /usr/cluster/lib/sc/scqsd -i qd1 -d /var/scqsd -p 9000
root 16333 9822 0 15:51:03 pts/2 0:00 grep -i 9000
root 16331 16330 0 15:50:54 ? 0:00 /usr/cluster/lib/sc/scqsd -i qd1 -d /var/scqsd -p 9000

We check the state of the quorum daemon on port 9000:

root@x4200m2 # /usr/cluster/bin/clqs show
=== Quorum Server on port 9000 ===

Disabled False

--- Cluster vmclus (id 0x4E743D17) Reservation ---

--- Cluster vmclus (id 0x4E743D17) Registrations ---

We are going to clear all references to the cluster before we add it back again:

root@x4200m2 # /usr/cluster/bin/clqs clear -c vmclus -I 0x4E743D17 9000
The quorum server to be cleared must have been removed from the cluster. Clearing a valid quorum server could compromise the cluster quorum.
Do you want to continue? (yes or no): yes

We add the quorum device back into our cluster, and start with the cluster upgrade:

[root@vm3:/var/tmp]# clquorum add -t quorum_server -p qshost=x4200m2 -p port=9000 qd1 (10-04 21:08)
clquorum: (C927487) Cannot communicate with quorum device "qd1".

Now that we have updated our quorum server, we are going to start with the Live Upgrade of the cluster nodes:

Remove the global mount option from /etc/vfstab on all your nodes, and if you are using DID device names, replace them with cXtXdX devices (see the example after the listing):

root@vm3:/]# vi /etc/vfstab (10-04 20:32)
"/etc/vfstab" 15 lines, 738 characters
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/zvol/dsk/rpool/swap - - swap - no -
#/dev/zvol/dsk/rpool/globaldev /dev/zvol/rdsk/rpool/globaldev /globaldevices ufs - yes -
/devices - /devices devfs - no -
sharefs - /etc/dfs/sharetab sharefs - no -
ctfs - /system/contract ctfs - no -
objfs - /system/object objfs - no -
swap - /tmp tmpfs - yes -
/dev/zvol/dsk/rpool/globaldev /dev/zvol/rdsk/rpool/globaldev /global/.devices/node@1 ufs 2 no -
#/dev/zvol/dsk/rpool/globaldev /dev/zvol/rdsk/rpool/globaldev /global/.devices/node@1 ufs 2 no global
#/dev/md/sdapa1/dsk/d100 /dev/md/sdapa1/rdsk/d100 /apache1 ufs 2 no global
~
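
As a minimal sketch of the DID replacement (the /data mount point and device numbers below are made-up examples, not taken from this cluster), a vfstab line such as:

/dev/did/dsk/d5s0 /dev/did/rdsk/d5s0 /data ufs 2 no global

would become something like:

/dev/dsk/c1t3d0s0 /dev/rdsk/c1t3d0s0 /data ufs 2 no -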

Create a new boot environment in the same zpool; you need to do this on all 3 servers:

[root@vm3:/]# lucreate -n sun32u3 (10-04 20:32)
Checking GRUB menu...
System has findroot enabled GRUB
Analyzing system configuration.
Comparing source boot environment file systems with the
file system(s) you specified for the new boot environment. Determining
which file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment .
Source boot environment is .
Creating boot environment .
Cloning file systems from boot environment to create boot environment .
Creating snapshot for on .
Creating clone for on .
Setting canmount=noauto for in zone on .
Creating snapshot for on .
Creating clone for on .
Setting canmount=noauto for in zone on .
Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
File propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE in GRUB menu
Population of boot environment successful.
Creation of boot environment successful.

We unzip the new software and create the auto-response file; we select the shared components, so we can later install them on the alternate boot environment:

[root@vm3:tmp/Solaris_x86]# ./installer -no -nodisplay -saveState autoinstall.txt (10-04 21:18)

Welcome to the Sun Java(TM) Availability Suite; serious software made
simple...

Choose Software Components - Main Menu
-------------------------------
Note: "* *" indicates that the selection is disabled

[ ] 1. Sun Cluster Geographic Edition 3.2 11/09
* * Quorum Server
[ ] 3. Monitoring Console 1.0 Update 1
[ ] 4. High Availability Session Store 4.4.3
* * Sun Cluster 3.2 11/09
* * Java DB 10.2.2.1
[ ] 7. Sun Cluster Agents 3.2 11/09
[ ] 8. All Shared Components

Enter a comma separated list of products to install, or press R to refresh
the list [] {"<" goes back, "!" exits}: 8

Choose Software Components - Confirm Choices
--------------------------------------------

Based on product dependencies for your selections, the installer will install:

[X] 8. All Shared Components

Ready to Install
----------------
The following components will be installed.

Product: Java Availability Suite
Uninstall Location: /var/sadm/prod/SUNWentsyssc32u3
Space Required: 22.21 MB
---------------------------------------------------
All Shared Components

1. Install
2. Start Over
3. Exit Installation

What would you like to do [1] {"<" goes back, "!" exits}? 1

Java Availability Suite
|-1%--------------25%-----------------50%-----------------75%--------------100%|

No products have been installed because the "-no" option was entered at the
command line. If you want to install products, you can run the installer again
without entering the "-no" option.

Once it has finished, our auto-response file has been created, and we can run the installer with it against the alternate boot environment:

[root@vm3:tmp/Solaris_x86]# lumount sun32u3 (10-04 21:28)
/.alt.sun32u3
[root@vm3:tmp/Solaris_x86]# ./installer -nodisplay -noconsole -state /var/tmp/Solaris_x86/autoinstall.txt -altroot /.alt.sun32u3 (10-04 21:28)
PSPHelper:getPackageRelativePathMap:DREFacade.getOSMajorReleaseNo() returned :5
PSPHelper:getPackageRelativePathMap:DREFacade.getOSMinorReleaseNo() returned :10
PSPHelper:getPackageRelativePathMap:DREFacade.getOSMajorReleaseNo() returned :5
PSPHelper:getPackageRelativePathMap:DREFacade.getOSMinorReleaseNo() returned :10
PSPHelper:getPackageRelativePathMap:DREFacade.getOSMajorReleaseNo() returned :5
PSPHelper:getPackageRelativePathMap:DREFacade.getOSMinorReleaseNo() returned :10
[root@vm3:tmp/Solaris_x86]#

We can check the state of the silent installation in the following log file:

[root@vm3:install/logs]# pwd (10-04 21:32)
/var/sadm/install/logs
[root@vm3:install/logs]# tail Java_Availability_Suite_install.B10042128 (10-04 21:32)
Installing Sun Cluster HA for Informix Dynamic Server as

## Installing part 1 of 1.

Installation of was successful.
Installed Package: SUNWscids

MFWK
Invoke the method upgradeMFWK of the class com.sun.entsys.upgrade.common.config.SolarisSCUpgradeHelper.
Install complete.

Now we need to upgrade the cluster software via scinstall, using the scinstall binary from the NEW version!

[root@vm3:/]# cd /var/tmp/Solaris_x86/Product/sun_cluster/Solaris_10/Tools (10-04 21:34)
[root@vm3:Solaris_10/Tools]# ls (10-04 21:34)
defaults dot.release lib locale scinstall
[root@vm3:Solaris_10/Tools]# (10-04 21:34)

I got this error when launching the upgrade:
[root@vm3:install/logs]#./scinstall -u update -R /.alt.sun32u3
scinstall: Sun Cluster for Solaris 10 cannot be installed on SunOS 5.Solaris.
scinstall: scinstall did NOT complete successfully!

Looking at the scinstall ksh script, I saw the problem was in an awk field, so I changed it:

The original script got field 2:
[root@vm3:Solaris_10/Tools]# cat /.alt.sun32u3/etc/release | grep Solaris | awk '{print $2}' (10-04 21:51)
Solaris

I had to change it to get field 3:
[root@vm3:Solaris_10/Tools]# cat /.alt.sun32u3/etc/release | grep Solaris | awk '{print $3}' (10-04 21:51)
10
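
In other words, the script derives the Solaris release number from /etc/release of the alternate root, and with this /etc/release format that number sits in field 3, not field 2. A minimal sketch of the kind of line involved (the variable name and exact context are assumptions, not the real scinstall source):

# original (assumed form): picks up the word "Solaris"
OS_REL=`grep Solaris ${ALT_ROOT}/etc/release | awk '{print $2}'`
# after the fix: picks up "10"
OS_REL=`grep Solaris ${ALT_ROOT}/etc/release | awk '{print $3}'`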

After that, it worked fine:

[root@vm3:Solaris_10/Tools]# ./scinstall -u update -R /.alt.sun32u3 (10-04 21:52)

Starting upgrade of Sun Cluster framework software

Saving current Sun Cluster configuration

Do not boot this node into cluster mode until upgrade is complete.

Renamed "/.alt.sun32u3/etc/cluster/ccr" to "/.alt.sun32u3/etc/cluster/ccr.upgrade".

** Removing Sun Cluster framework packages **
Removing SUNWkscspmu..done
Removing SUNWksc.....done
Removing SUNWjscspmu..done
Removing SUNWjscman..done
Removing SUNWjsc.....done
Removing SUNWhscspmu..done
Removing SUNWhsc.....done
Removing SUNWfscspmu..done
Removing SUNWfsc.....done
Removing SUNWescspmu..done
Removing SUNWesc.....done
Removing SUNWdscspmu..done
Removing SUNWdsc.....done
Removing SUNWcscspmu..done
Removing SUNWcsc.....done
Removing SUNWsctelemetry..done
Removing SUNWscderby..done
Removing SUNWscspmu..done
Removing SUNWscspmr..done
Removing SUNWjfreechart..done
Removing SUNWscmautilr..done
Removing SUNWscmautil..done
Removing SUNWscmasau..done
Removing SUNWscmasasen..done
Removing SUNWscmasar..done
Removing SUNWscmasa..done
Removing SUNWmdmu....done
Removing SUNWmdmr....done
Removing SUNWscsam...done
Removing SUNWscsal...done
Removing SUNWscman...done
Removing SUNWscvm....done
Removing SUNWscsmf...done
Removing SUNWscgds...done
Removing SUNWscdev...done
Removing SUNWscnmu...done
Removing SUNWscnmr...done
Removing SUNWscrtlh..done
Removing SUNWscr.....done
Removing SUNWscscku..done
Removing SUNWscsckr..done
Removing SUNWsczu....done
Removing SUNWsccomzu..done
Removing SUNWsczr....done
Removing SUNWsccomu..done
Removing SUNWscu.....done

** Installing SunCluster 3.2 framework **
SUNWscu.....done
SUNWsccomu..done
SUNWsczr....done
SUNWsccomzu..done
SUNWsczu....done
SUNWscsckr..done
SUNWscscku..done
SUNWscr.....done
SUNWscrtlh..done
SUNWscnmr...done
SUNWscnmu...done
SUNWscdev...done
SUNWscgds...done
SUNWscsmf...done
SUNWscvm....done
SUNWscman...done
SUNWscsal...done
SUNWscsam...done
SUNWmdmr....done
SUNWmdmu....done
SUNWscmasa..done
SUNWscmasar..done
SUNWscmasasen..done
SUNWscmasau..done
SUNWscmautil..done
SUNWscmautilr..done
SUNWjfreechart..done
SUNWscspmr..done
SUNWscspmu..done
SUNWscderby..done
SUNWsctelemetry..done
SUNWcsc.....done
SUNWcscspmu..done
SUNWdsc.....done
SUNWdscspmu..done
SUNWesc.....done
SUNWescspmu..done
SUNWfsc.....done
SUNWfscspmu..done
SUNWhsc.....done
SUNWhscspmu..done
SUNWjsc.....done
SUNWjscman..done
SUNWjscspmu..done
SUNWksc.....done
SUNWkscspmu..done

Restored /.alt.sun32u3/etc/cluster/ccr.upgrade to /.alt.sun32u3/etc/cluster/ccr

Completed Sun Cluster framework upgrade

The release of "Sun Cluster Support for Oracle Real Application
Clusters" must correspond to the release of Sun Cluster software that
you just upgraded to. Upgrade "Sun Cluster Support for Oracle Real
Application Clusters" to the corresponding release.

Updating nsswitch.conf ... done

Log file - /.alt.sun32u3/var/cluster/logs/install/scinstall.upgrade.log.15475

Once the cluster framework has been updated, we can upgrade all the data service agents as follows:

[root@vm3:Solaris_10/Tools]# /.alt.sun32u3/usr/cluster/bin/scinstall -u update -s all -d ../../../sun_cluster_agents -R /.alt.sun32u3 (10-04 22:08)

Starting upgrade of Sun Cluster data services agents

List of upgradable data services agents:
(*) indicates selected for upgrade.

* apache
* dns
* hadb
* iws
* nfs
* oracle
* s1as
* s1mq
* sap
* livecache
* sapdb
* sapwebas
* sybase
* wls
* 9ias
* PostgreSQL
* container
* dhcp
* ids
* mqi
* mqs
* mys
* n1ge
* smb
* n1sps
* tomcat
* oracle_rac
* rac_svm

Upgrading Sun Cluster data services agents software

Do not boot this node into cluster mode until upgrade is complete.

** Removing Apache Web Server on Sun Cluster **
Removing SUNWscapc...done

** Installing Sun Cluster HA for Apache **

..........................
...........................

** Removing HA oracle_rac Data Service on Sun Cluster **
Removing SUNWscor....done
Removing SUNWscucm...done

** Installing Sun Cluster Oracle RAC **
SUNWscor....done
SUNWscucm...done

** Removing Solaris Volume Manager (SVM) module of the Sun Cluster framework for Oracle RAC **
Removing SUNWscmd....done

** Installing Sun Cluster Oracle RAC SVM **
SUNWscmd....done

Completed upgrade of Sun Cluster data services agents

Log file - /.alt.sun32u3/var/cluster/logs/install/scinstall.upgrade.log.24384

Now we can activate our boot environment, so the system boots from it on the next reboot:

[root@vm3:/]# luumount sun32u3 (10-04 22:16)
[root@vm3:/]# luactivate sun32u3 (10-04 22:16)
System has findroot enabled GRUB
Generating boot-sign, partition and slice information for PBE
Generating boot-sign for ABE
Generating partition and slice information for ABE
Copied boot menu from top level dataset.
Generating multiboot menu entries for PBE.
Generating multiboot menu entries for ABE.
Disabling splashimage
Re-enabling splashimage
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unaffected
Done eliding bootadm entries.

**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Boot from the Solaris failsafe or boot in Single User mode from Solaris
Install CD or Network.

2. Mount the Parent boot environment root slice to some directory (like
/mnt). You can use the following commands in sequence to mount the BE:

zpool import rpool
zfs inherit -r mountpoint rpool/ROOT/s10x_u9wos_14a
zfs set mountpoint= rpool/ROOT/s10x_u9wos_14a
zfs mount rpool/ROOT/s10x_u9wos_14a

3. Run utility with out any arguments from the Parent boot
environment root slice, as shown below:

/sbin/luactivate

4. luactivate, activates the previous working boot environment and
indicates the result.

5. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
Propagating findroot GRUB for menu conversion.
File propagation successful
File propagation successful
File propagation successful
File propagation successful
File deletion successful
File deletion successful
File deletion successful
Activation of boot environment successful.

Once we get to this point, we have to do the same thing on all the other nodes.

OK, so I have finished on all nodes. I have autoboot disabled, so I'm going to shut down the nodes and boot the 3 of them at the same time from the new BE.

[root@vm2:/]# shutdown -y -g0 -i6 (10-04 19:28)

Shutdown started. Tue Oct 4 19:28:54 CEST 2011

Changing to init state 6 - please wait
Broadcast Message from root (pts/1) on vm2 Tue Oct 4 19:28:54...
THE SYSTEM vm2 IS BEING SHUT DOWN NOW ! ! !
Log off now or risk your files being damaged

propagating updated GRUB menu
Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
File propagation successful
File propagation successful
File propagation successful
File propagation successful

The 3 nodes boot OK; I'm going to check that I booted from the new BE and also that my cluster has been updated:

[root@vm3:/]# lustatus (10-04 19:24)
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
s10x_u9wos_14a             yes      no     no        yes    -
BEclus32                   yes      yes    yes       no     -

[root@vm3:/]# cat /etc/cluster/release (10-04 19:26)
Sun Cluster 3.2u3 for Solaris 10 i386
Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.
[root@vm3:/]#

[root@vm2:/]# clnode show-rev (10-04 19:39)
3.2u3

OK, looks good...

[root@vm3:/]# clnode status (10-04 19:27)

=== Cluster Nodes ===

--- Node Status ---

Node Name                                   Status
---------                                   ------
vm3                                         Online
vm2                                         Online
vm1                                         Online

OK, so now I can finally set the global fencing parameter to nofencing:

[root@vm2:/]# cluster set -p global_fencing=nofencing (10-04 19:41)
Updating shared devices on node 1
Updating shared devices on node 2
Updating shared devices on node 3

Once I'm ready, I can share the iSCSI LUNs again from the host; once shared, I scan for them from the 3 servers:

root@x4200m2 # zfs set shareiscsi=on vbox/iscsi-vol1

[root@vm3:/]# devfsadm -c iscsi
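
The same rescan is needed on the other two nodes; assuming the iSCSI discovery settings are still in place from the original setup, it is just the same command on each of them:

[root@vm1:/]# devfsadm -c iscsi
[root@vm2:/]# devfsadm -c iscsi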

We check that we can see the LUN from all of them:

[root@vm3:/]# echo | format (10-04 22:21)
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0t0d0
          /pci@0,0/pci8086,2829@d/disk@0,0
       1. c0t2d0
          /pci@0,0/pci8086,2829@d/disk@2,0
       2. c1t3d0
          /iscsi/disk@0000iqn.1986-03.com.sun%3A02%3A7d737790-6343-c0c9-8e4d-994c6082bee20001,0
Specify disk (enter its number): Specify disk (enter its number):

And then populate the DID database with the new disk:

[root@vm3:/]# cldev populate (10-04 22:21)
[root@vm3:/]# cldev status (10-04 22:21)

=== Cluster DID Devices ===

Device Instance              Node         Status
---------------              ----         ------
/dev/did/rdsk/d1             vm3          Ok
/dev/did/rdsk/d10            vm3          Ok
/dev/did/rdsk/d12            vm1          Ok
/dev/did/rdsk/d3             vm2          Ok
/dev/did/rdsk/d5             vm1          Ok
                             vm2          Ok
                             vm3          Ok
/dev/did/rdsk/d8             vm1          Ok
/dev/did/rdsk/d9             vm2          Ok

We finally have our shared disk working on the 3 nodes it is attached to.


Comments

You saved my day with the awk error in the scinstall script. This is not even documented on Oracle's website.

Thanks a lot for sharing!

No problem, thanks for sharing your thoughts.
