
Disaster Recovery Plan
DRPCX002: Warehouse System Recovery

Last update: Tuesday, 21-Mar-2000 10:31:26 CST



Instructions in this disaster recovery plan are intended to be of a general nature wherever possible. A moderate amount of expertise in UNIX systems administration is required, with particular experience in support of Solaris 2.x. If local expertise is unavailable, contract support is available from Sun Microsystems.


Sun Ultra-Enterprise 3000 or functional equivalent, configured to the following specifications:

 Sun Part #   Qty Description

 E3001        1  Enterprise 3000 Server Base Package, Tower Enclosure,
                 one SunCD4, one Power/Cooling module,
                 Solaris server license, no processors, no memory,
                 no CPU/Memory Boards, no I/O boards

 2600A        1 Enterprise CPU/Memory board

 2500A        2 167MHz UltraSPARC CPU module, 512K external cache

 7022A        2 256MB memory expansion kit

 2610A        1 SBus I/O board

 6591A        1 37.8GB SPARCstorage Array Model 112 with 18
                2.1GB 7200 RPM Fast/Wide SCSI-2 SC disks and
                fiber channel cable

 6206A        1 14GB 8mm internal tape drive

 595A         1 Fiber channel optical module

 X311L        1 U.S./Asia localized country kit (Power cord, etc.)

 X1023A       1 Sun FDDI 4.0 single-attach SBus adapter

 non-SUN      1 VT-100 Compatible ASCII terminal with null-modem cable
                for use as system console.

Note: Part numbers are taken from the July 23, 1996 version of
"Sun Microsystems Computer Company U.S. End User Price List".

Note: This configuration differs significantly from the current
production Data Warehouse server.  The SPARCcenter
2000 chassis has been withdrawn from marketing.  The closest approximate
configuration and "best bang for the buck" currently
available is a modestly configured Enterprise 3000.


1.  Sun Microsystems (Hardware, Operating System, and general support)

  Tim Simmons (Account Rep)
  Sun Microsystems Computer Corp.
  5865 Ridgeway Center Parkway
  Suite 300
  Memphis, TN  38120
  (901) 763-3964 (voice)
  (901) 680-9951 (fax)
2.  SAS

 SAS Institute, Inc.
 SAS Campus Drive
 Cary, NC  27513
 (919) 677-8003 (Customer Service)
 (919) 677-8008 (Technical Support)


  1. Solaris 2.5.1 (or later) server media kit. This plan assumes Solaris 2.5.1.

  2. Sun language products installation CD-ROM. Current version is

    "SunSoft WorkShop Volume 4, Number 2". Required products are:

  3. Supporting code and drivers for the SPARCstorage array. Current version is

    "SPARCstorage Array Software and SPARCstorage Volume Manager 2.1.1"

    (The current version of this software should ship with new SPARCstorage Arrays)

  4. Supporting code and drivers for the Sun FDDI SBus Adapter 4.0. The current version of this CD should ship with new FDDI adapter(s).

  5. Current install media for SAS. Obtain this from the vendor. At present, we are running SAS 6.11.


The Ultra-Enterprise 3000 will require one standard 20A/120V single-phase power feed for the system itself. Additional power connections for the system console and the SPARCstorage Array must be provided. A single additional four-outlet 20-amp circuit should suffice.

The hardware list in Section I assumes that the system will be attached to the campus network via FDDI. If this is not the case, each 2610A SBus I/O board provides a single 10/100BASE-T Ethernet attachment port. A network connection of some sort must, of course, be in place before the system can be brought on-line.
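As a sketch of the network setup on Solaris 2.5.1: the interface is plumbed at boot from an /etc/hostname.&lt;interface&gt; file. The interface name "nf0" (SunFDDI single-attach) and the example address below are assumptions; substitute "hme0" for the on-board Ethernet port, and use the address actually assigned in step 4 of the recovery procedure.

```shell
# Name the interface to be configured at boot.  "nf0" is the usual
# device name for the SunFDDI/S adapter; use "hme0" for the 2610A's
# on-board 10/100BASE-T port if FDDI is unavailable.
echo "warehous" > /etc/hostname.nf0

# Register the assigned address in /etc/hosts.  192.0.2.10 is a
# placeholder from the documentation address range -- use the real
# address obtained from network services.
echo "192.0.2.10   warehous warehous.uark.edu" >> /etc/hosts
```

After a reboot (or an explicit ifconfig plumb/up), verify the interface with 'ifconfig -a' before proceeding with the network-dependent installation steps.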


Boot volume (install defaults to first drive of first SSA):

 /             -  Slice 0  -   64MB
 /usr          -  Slice 6  -  450MB
 /opt          -  Slice 5  -  512MB
 /export/home  -  Slice 7  -  394MB
 /var          -  Slice 3  -  256MB  (temporary - moves to 3GB SSA volume)
 swap          -  Slice 1  -  350MB

The initial boot volume configuration assumes that you're installing a complete bootable system on a single SSA disk. In the current configuration, the boot volume is not encapsulated for management by the SSA volume manager, although this can be changed if enough disk is available to mirror the boot volume during recovery. Refer to the Volume Manager documentation if you want to consider encapsulating the boot disk.

File system layout for "system" file systems should follow the layout recommended for comp.uark.edu.

After installation of the operating system, the /var partition has to be enlarged in order to provide sufficient disk space for logging, auditing, and accounting data as well as to provide enough temporary space under /var/tmp for regular use. To move the /var partition to this configuration, create a suitable striped volume through the volume manager interface (see the Volume Manager documentation), then move the contents of /var onto the new volume and update /etc/vfstab accordingly.
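A sketch of the /var move follows. The volume name "var" is an assumption; substitute whatever name was given to the striped volume when it was created through the Volume Manager interface.

```shell
# Lay down a file system on the new striped volume and mount it
# temporarily.  /dev/vx/rdsk/var is the assumed raw device name.
newfs /dev/vx/rdsk/var
mount /dev/vx/dsk/var /mnt

# Copy the existing /var contents with ufsdump/ufsrestore, which
# preserves ownership, permissions, and device files.
ufsdump 0f - /var | (cd /mnt && ufsrestore rf -)

# Point the /var entry in /etc/vfstab at /dev/vx/dsk/var, then
# reboot so the enlarged /var is picked up cleanly.
umount /mnt
```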

Create other SSA volumes as follows:

 /export/home1  -  RAID-5, size 2500MB
 /export/warehouse  -  RAID-5, size at least 15GB
 /opt1/sastmp  -  striped, size 1500MB (not RAID-5, for performance reasons)

An additional 1GB disk slice should be allocated on another SSA disk for use as swap space.
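The additional swap slice can be activated as sketched below. The device name c1t2d0s1 is purely illustrative; substitute the slice actually allocated on the second SSA disk.

```shell
# Activate the new swap slice immediately (device name is an example):
swap -a /dev/dsk/c1t2d0s1

# Verify that it now appears in the swap list:
swap -l

# Make the addition permanent with a vfstab entry of the form:
#   /dev/dsk/c1t2d0s1  -  -  swap  -  no  -
```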

Run 'newfs' against each logical volume to lay down new file systems. Current file systems were created with:

 comp# newfs -m 1 -c 32 /dev/vx/rdsk/volname

It may be desirable to create a mount point for '/opt1/pub' from comp.uark.edu in order to NFS-mount shared programs from that system. This doesn't necessarily have to be done at recovery time, but should be done prior to bringing the new warehouse server into production.
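The NFS mount can be set up as follows. The assumption that comp.uark.edu exports the file system under the path /opt1/pub is illustrative; confirm the actual export path on comp before adding the entry.

```shell
# Create the local mount point:
mkdir -p /opt1/pub

# Add a vfstab entry so the mount survives reboots; mounting
# read-only is sensible for shared program directories:
#   comp:/opt1/pub  -  /opt1/pub  nfs  -  yes  ro

# Mount it now and verify:
mount /opt1/pub
df -k /opt1/pub
```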


Off-site backup procedures for warehous.uark.edu remain to be determined. No off-site backups are maintained at present due to lack of funds for supporting hardware. Current recovery plans call for reconstruction of the data warehouse contents from on-line and historical data derived from the University's administrative MVS/ESA system.


  1. Obtain new equipment as described in Section I.
  2. Assess "physical plant" facilities at the cold site; verify that correct power and data connectivity is in place (two 20A/120V power feeds; ethernet, fast ethernet, or FDDI; air conditioning, rack or table space, cables, etc).
  3. Connect the new system, storage arrays, and network.
  4. Make sure you know the IP address and domain name assigned to the new system before proceeding.
  5. Complete any vendor-prescribed pre-installation diagnostics.
  6. Install Solaris and SSA support code per instructions.
  7. Create SSA volumes and file systems. (Refer to Section IV).
  8. Create enlarged /var partition (Refer to Section IV for detailed instructions).
  9. Install the Sun C compiler.
  10. Install SAS.
  11. Create '/export/warehouse' and any other remaining file systems.
  12. Re-create user definitions and data as deemed appropriate by the Data Warehouse applications group.
  13. Enable access for the Data Warehouse applications group to the system so they can begin the process of reconstructing contents of the warehouse from data resident on the MVS/ESA system.
  14. As soon as possible, re-implement routine backups. It should be possible to proceed with normal operations at this point.
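The routine backups in step 14 can be restarted with a minimal full-dump pass like the sketch below, using the internal 14GB 8mm drive. The file system list and the tape device name are assumptions; verify the device under /dev/rmt before running, and adjust the list to match the volumes actually created in Section IV.

```shell
# One level-0 ufsdump per file system to the no-rewind 8mm device,
# so all dumps land on a single tape.  "u" records each dump in
# /etc/dumpdates for later incrementals.
for fs in / /usr /opt /var /export/home /export/warehouse; do
    ufsdump 0uf /dev/rmt/0n $fs
done

# Rewind and take the drive off-line when the pass completes:
mt -f /dev/rmt/0 rewoffl
```

Once a full dump cycle is in place, incrementals (levels 1-9) can be scheduled through cron per normal operating practice.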

It's difficult to offer a precise estimate of time required for "cold recovery" of warehous.uark.edu under these circumstances. Once new hardware is received, a minimum of three working days should be expected. The time required will vary according to the ability of the person performing system recovery and the speed of the tape subsystem used to perform restores.

Copyright © 1997 University of Arkansas
All rights reserved