[University of Arkansas][Computing Services]

Disaster Recovery Plan
APSARA Restore Procedure
(DRPAPS01)

Last update: Tuesday, 21-Mar-2000 10:30:19 CST

The apsara host is the primary WWW server. It also serves as the database server for applications using Informix and EDA in support of WWW applications. The cavern user community's filesystems are NFS-mounted on this machine so that users can publish their content. The UARKinfo document tree resides on apsara and is maintained there by the Information Services staff. New program development supporting SIS efforts takes place on this machine. Future efforts for secure WWW transactions, including credit card processing, will take place here as well. The primary emphasis here is on security. All connections to www.uark.edu actually occur on this host, as www.uark.edu is a CNAME for apsara.

Refer to DRPWWW50: WWW Server Relationships for information on how the various web servers interact.



Hardware Inventory

1	A11-UBA1-9S-064CB	Sun UltraServer-1 model 170; 64MB RAM			$12,297.00
				2.1GB SCSI-2 disk, internal SunCD-4

2	X311L			Localized Power Kit - N. American			N/A

3	X7002A			64-MB Memory Expansion					$2,310.00

4	X5175A			2.1GB Internal SCSI-2 Expansion Disk			$690.00

5	X6202A			14GB 8mm Tape w/50-to-68 pin SCSI cable			$1,935.00

The above prices reflect a 40% educational discount.

Power & Space requirements

Three physical units requiring three 110V outlets.

The CPU is an approximately 20-inch-square box; the other units stack on top of it. The console also requires an approximately 20-inch footprint.

File system Layout

The primary task in restoring this system is to duplicate the original filesystem layout. After installing the base operating system, establishing network connectivity, and installing security patches, the original contents of these filesystems can be restored in place from tape.


apsara.uark.edu File system layout.

Filesystem            kbytes    used   avail capacity  Mounted on
/dev/dsk/c0t0d0s0      45711   21489   19652    53%    /
/dev/dsk/c0t0d0s6     288855  218432   41543    85%    /usr
/proc                      0       0       0     0%    /proc
fd                         0       0       0     0%    /dev/fd
/dev/dsk/c0t0d0s3     480919   22908  409921     6%    /var
/dev/dsk/c0t0d0s7     310199  160040  119149    58%    /export/home0
/dev/dsk/c0t1d0s3     492351   51079  392042    12%    /export/home1
/dev/dsk/c0t1d0s4     966382  619682  250070    72%    /export/home1/services
/dev/dsk/c0t0d0s5     577695  330405  189530    64%    /opt
/dev/dsk/c0t1d0s0     492351  187532  255589    43%    /opt1
swap                  394480    2104  392376     1%    /tmp
cavern.uark.edu:/export/home
                     3894488 1687512 1817536    49%    /export/home
cavern.uark.edu:/export/home/services
                     1952568 1066360  690952    61%    /export/home/services

The output above from 'df -k' shows how space is allocated across the two 2GB internal disk units. It also shows the filesystems mounted from cavern. In restoring this setup, the most important filesystem is /export/home1/services. The other slices could vary somewhat in size, but the services filesystem should be created at the size shown above or larger. The /var filesystem needs more space than is usually allocated because of the size of the log files and the desire to keep old log files for trend analysis.
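After the OS install, the slice layout above can be recreated with newfs and mount. The sketch below wraps the destructive Solaris commands in a dry-run guard so the sequence can be reviewed before it is run for real; the device names are assumptions carried over from the old system and must be verified on the replacement hardware.

```shell
#!/bin/sh
# Sketch: recreate the services slice to match the original layout.
# Device names (c0t1d0s4) are assumptions from the old system; verify
# them against 'format' output on the replacement disks.
DRY_RUN=${DRY_RUN:-1}

run() {
    # print the command instead of executing it while DRY_RUN=1 (the default)
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run newfs /dev/rdsk/c0t1d0s4                        # build UFS on the slice
run mkdir -p /export/home1/services
run mount /dev/dsk/c0t1d0s4 /export/home1/services
run df -k /export/home1/services                    # confirm ~966MB or more
```

Run with DRY_RUN=0 only once the device names have been confirmed.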
Restore Process
Restoring this system should be straightforward, provided good backup tapes exist. The first step is to acquire and assemble equipment that matches the previous system as closely as possible. Next, duplicate the filesystem layout during the OS install to match the original. Finally, restore the user community, the mail spool, and various individual files such as the passwd file. Below is an outline of how this process might occur.

Creating a replacement system from scratch involves the following steps, each of which is documented in detail later in this document.


1) Acquire replacement component parts (as detailed above).
2) Determine a suitable site for assembling the replacement system.
3) Assemble the replacement system hardware.
4) Establish the base operating system and network capability.
5) Restore the filesystem layout to match the old apsara system.
6) Restore the filesystems from the previous system directly from tape.
7) Test and evaluate to verify that all systems are in place.

I. Replacement Hardware.
	Follow the procedure outlined elsewhere for submitting a detailed replacement 
	parts request.  Perform all administrative tasks required to get the replacement
	parts expedited from SMCC.

II. Establish New Site.
	Consult the Disaster Recovery Coordinator for site assignment.  Submit 
	the space and power requirements listed above.

III. Reconstruct Hardware.
	Upon receipt of the hardware, unpack and assemble the components at the 
	new site.  Retain all paperwork, shipping information, etc.

IV.  Bring up base system.
	1. Establish a base operating system environment to match the prior system.  
	   If the new system ships with an OS newer than the previous system's, 
	   locate and install the release that had been on apsara previously 
	   (e.g. Solaris 2.4 vs. Solaris 2.5), then do an upgrade 
	   if desired. (See Standard Solaris Install 
	   procedures.)

	2. Install patches (current recommended patches).

	3. Install the current recommended security software, as outlined in 
 	   the documentation written by Peter Laws regarding the latest recommended 
	   steps for establishing a secure Sun system.
	4. Coordinate the IP address and DNS entries with NWS to recreate the 
	   'apsara' hostname.
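Once NWS has the entries in place, name resolution can be spot-checked from any Unix host. A minimal sketch, assuming the original hostnames (www.uark.edu as a CNAME for apsara.uark.edu):

```shell
#!/bin/sh
# Sketch: spot-check the recreated DNS entries. The hostnames are from
# the original setup; www.uark.edu should be a CNAME for apsara.uark.edu.
check_host() {
    # report whether the name service (DNS or hosts file) knows the name
    if getent hosts "$1" >/dev/null 2>&1; then
        echo "$1 resolves"
    else
        echo "$1 does NOT resolve"
    fi
}

check_host apsara.uark.edu
check_host www.uark.edu
```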

V.   Recreate filesystems.
	1. Using the preferred restore method (ufsrestore; see DRPWWW02), restore 
	   the individual filesystems that are not related to system operation 
	   (e.g. /export/home1), such as the user community.  

	2. Restore individual configuration files one by one.  The 
	   password files /etc/passwd and /etc/shadow are examples of this sort of 
	   file.  The numerous files of this sort are outlined in another 
	   section of this document.
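The full-slice restore in step 1 can be sketched as follows. The tape device name (/dev/rmt/0) and the use of ufsrestore's 'r' (full restore) mode are assumptions based on the standard Solaris tools; DRPWWW02 remains the authoritative procedure. The destructive command is wrapped in a dry-run guard.

```shell
#!/bin/sh
# Sketch: full-slice restore from the 8mm tape drive. The tape device
# /dev/rmt/0 is an assumption; DRPWWW02 is the authoritative procedure.
DRY_RUN=${DRY_RUN:-1}

restore_slice() {
    # run ufsrestore in 'r' (full restore) mode inside the mount point
    dir=$1
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would restore /dev/rmt/0 into $dir"
    else
        (cd "$dir" && ufsrestore rvf /dev/rmt/0)
    fi
}

restore_slice /export/home1
restore_slice /export/home1/services
```

Each slice's dump tape must be loaded before its restore_slice call is run with DRY_RUN=0.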


Software Inventory
System Software
Solaris 2.5.1 OS			Operating System
ADSM					Incremental backup software
	
Sun compilers
	C, C++
Security
sendmail.8.7.2				Berkeley sendmail
tcp_wrappers	 (TCPD)			
pidentd-2.5
fingerd-1.3
rpcbind_1.1

Servers
Netscape commerce server 		WWW
Netscape communications server     	WWW
wwwstat-1.01
gwstat

WAIS	freeWAIS-0.202		 	Full text indexing
Isite					Full text indexing
CSO					E-mail directory
	qi	server
	ph	client

Informix database
	Viewpoint Pro
	ESQLC
	Informix Online Server
	Sapphire
	ISQLperl

EDA/SQL 3.1		Mainframe connectivity middleware

User progs
Pine-3.91		Pine email package
Lynx			Text-mode WWW browser




Copyright © 1997 University of Arkansas
All rights reserved