[University of Arkansas][Computing Services]

Disaster Recovery Plan
Tape Restore Process

Last update: Tuesday, 21-Mar-2000 10:36:02 CST

This document is intended to provide a brief overview of the ufsrestore utility as it is used to restore file systems to newly prepared disk slices. In the event of an emergency such as a disk failure, or as part of a disaster recovery process, the chief task is to rebuild the system. Assuming that new hardware has been acquired and a fresh OS install has taken place, a replication of the previous file systems ('slices' in SunOS terms) should have been accomplished. This process should provide the following:

1. File systems with names that match the previous setup.

2. File systems that existed as whole slices are dedicated in the same manner on the new disk.

3. Disk capacities match or exceed those of the old system.

These steps must be planned for and achieved during the OS install. It is acceptable that the number of drives and/or slices may vary, and that there could be additional needs for the new machine. It is only crucial that the file systems to be restored have compatible slices on the newly prepared machine.

Backups were made using the ufsdump utility. During this process entire disk slices are dumped to tape. Most of the file systems created during the install of the OS need not be restored; only a few select files will need to be restored from tape for these file systems. For other large file systems, such as those containing many user accounts, an entire slice may be restored to recreate those accounts en masse.


For both the cavern and alumni systems, a script is executed from the cron facility on a weekly basis that dumps all of the slices of all disks to tape. The script sets up and runs ufsdump. Portions of this script are shown below.

The script uses the Mag Tape command 'mt' to rewind the tape.

# rewind the tape, just to be sure

echo "Rewinding tape...\c"

mt -f ${TAPE} rewind

echo "OK."

Other variables are set or picked up from the command line relating to the blocking factor for the drive and the level of backup being done; we use only full (level 0) backups. The following loop cycles through the slices in order to dump them to tape.

# Just add slice device names to this list in order to get them
# backed up. You MAY NOT insert a newline here.
for SLICE in c0t3d0s0 c0t3d0s3 c0t1d0s2 c0t5d0s2 c0t4d0s3
do
    echo "Dumping ${SLICE} to ${TAPE}..."
    ufsdump ${LEVEL}ubf ${BLKFACTOR} ${TAPE} /dev/rdsk/${SLICE}
    echo ""
done
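The loop depends on several variables being set earlier in the script. A minimal sketch of that setup follows; the device name, blocking factor, and default level shown here are assumptions, not values taken from the actual script.

```shell
# Sketch of the variable setup the dump loop relies on (assumed values).
TAPE=${TAPE:-/dev/rmt/0cn}      # no-rewind device, so successive dumps stack on the tape
BLKFACTOR=${BLKFACTOR:-126}     # blocking factor handed to ufsdump's 'b' option
LEVEL=${LEVEL:-0}               # dump level; 0 produces the full backups described above
echo "Level ${LEVEL} dump to ${TAPE}, blocking factor ${BLKFACTOR}"
```

Using a no-rewind device is what allows each slice's dump to be written as a separate file on the same tape.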

The order of the file systems as they exist on the backup tape matches the order in which they appear on the line above beginning with 'for SLICE'. During the restore process, the backup script can therefore be referenced to determine the backup order, which is necessary in order to position the tape to the desired slice prior to running the restore command. A comparison of the output of the command 'df -k' with the backup script will reveal which file systems were contained on which slices.
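That ordering can be made explicit. The sketch below prints the slice-to-dump-file mapping implied by the 'for SLICE' line; note that mt numbers the files on a tape starting from zero.

```shell
# Sketch: print the slice -> tape-file mapping implied by the backup
# script's 'for SLICE' line; mt numbers files on the tape from zero.
i=0
for SLICE in c0t3d0s0 c0t3d0s3 c0t1d0s2 c0t5d0s2 c0t4d0s3
do
    echo "dump file ${i}: /dev/rdsk/${SLICE}"
    i=$((i + 1))
done
```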

The output of the 'df' command is displayed below. The actual controller, target, disk and slice components of the new disks may not exactly match the old ones, but so long as the capacities can accommodate the old file systems, those file systems can be restored to the new slices.

Filesystem            kbytes    used   avail capacity  Mounted on
/dev/dsk/c0t3d0s0      48023   13488   29735    32%    /
/dev/dsk/c0t3d0s6     288855  204672   55303    79%    /usr
/proc                      0       0       0     0%    /proc
fd                         0       0       0     0%    /dev/fd
/dev/dsk/c0t3d0s3     492351  171424  271697    39%    /var
/dev/dsk/c0t5d0s2    1952573 1066630  690693    61%    /export/home/services
/dev/md/dsk/d0       3894490 1684740 1820310    49%    /export/home
/dev/dsk/c0t3d0s4     480919  345665   87164    80%    /export/home1
/dev/dsk/c0t3d0s5     480919       9  432820     1%    /export/home2
/dev/dsk/c0t4d0s4     985446  460738  426168    52%    /opt
/dev/dsk/c0t4d0s5     966382  720518  149234    83%    /opt1
swap                  188392    5220  183172     3%    /tmp
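Before restoring, it is worth confirming that each new slice is at least as large as the old file system it will receive. A minimal sketch follows; the old kbytes figure is copied from the root slice in the listing above, and '/' is checked only so the command runs on any machine.

```shell
# Sketch: verify a new slice can hold an old file system before restoring.
# OLD_KBYTES comes from the saved 'df -k' listing; the mount point queried
# here ('/') is an example only.
OLD_KBYTES=48023                            # old root slice, from the listing above
NEW_KBYTES=$(df -kP / | awk 'NR==2 {print $2}')
if [ "${NEW_KBYTES}" -ge "${OLD_KBYTES}" ]; then
    echo "new slice can hold the old file system"
else
    echo "new slice too small -- repartition before restoring"
fi
```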


Although the backing up is done from a script by necessity, the restoration process must be carefully considered and will probably be done manually. It would be possible to put together a script to facilitate the restore process, but it is just as simply accomplished by running the commands by hand. The steps required to manually restore files and whole slices are outlined here.

Selecting the dump file that contains the files, directories or file systems.

1. Reference the backup script to determine the backup order for the original system, and compare this to the file system layout on the new disks. Then position the tape to the desired dump file. Note that mt numbers the files on the tape starting from zero, so if the third slice listed in the backup script is c0t1d0s2 and that slice contains the needed file systems, advance to dump file number 2:

The command: mt -f /dev/rmt/0cn asf 2

This will position the tape to the start of the third dump file. Now, to extract files and/or directories, the following commands apply.

The command: ufsrestore xf /dev/rmt/0cn filename

Where filename is the name of a single file or a directory, this will retrieve the named item as long as it is contained in the dump file.

IMPORTANT! ---> the file or directory will be deposited in the current working directory.

It's sometimes advisable to run this command from within the /tmp directory for single files or directories that aren't very large. They can then be moved into place once they have been pruned and/or edited. For an entire slice containing many megabytes of directories and files, it will probably be more convenient to restore directly to the new slice designated to hold the backup material.
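The whole-slice case can be sketched as follows. Given a slice's 1-based position on the backup script's 'for SLICE' line, the snippet computes the positioning and restore commands and prints them rather than running them, since they require a tape drive; the device and mount point are example names only.

```shell
# Sketch: build the tape-positioning and whole-slice restore commands.
# mt numbers tape files from zero, hence the subtraction; the device
# name and mount point are examples, not taken from a live system.
TAPE=/dev/rmt/0cn
SLICE_NO=3                      # e.g. the third slice on the 'for SLICE' line
FILENO=$((SLICE_NO - 1))
echo "mt -f ${TAPE} asf ${FILENO}"
echo "cd /mnt/newslice          # restore lands in the current directory"
echo "ufsrestore rf ${TAPE}     # 'r' rebuilds the entire file system"
```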

An alternate method of selecting and restoring individual files/dirs is the interactive mode. As above, advance the tape to the dump file that contains the desired material, then run ufsrestore in interactive mode using the command below.

The command: ufsrestore i /dev/rmt/0cn

Run in this way, ufsrestore presents the user with a prompt that looks like the one below.

ufsrestore >

From this prompt the user can traverse the directories stored in the dump file. To do this use the 'cd' command as you would at a normal unix prompt. To view the files listed therein use the 'ls' command. A list of available commands can be examined by entering the '?' command. The 'add' command is used to select a file or directory to be added to a list of files that will be extracted.

ufsrestore > ls

directory1/ directory2/

afile anotherfile

To add one to the list:

ufsrestore > add afile

Finally, when all files/dirs have been selected, the extract process can begin simply by entering the 'extract' command.

ufsrestore > extract

Remember that the selected files/dirs will be placed in the current working directory. Plan for this by changing to the proper directory before starting the ufsrestore process.
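One simple habit, sketched below, is to stage such restores in a scratch directory so the extracted material lands somewhere safe before being moved into place; the path is only an example.

```shell
# Sketch: stage a restore in a scratch directory; the path is an
# example only. ufsrestore would deposit extracted files in the cwd.
RESTORE_DIR="/tmp/restore.$$"
mkdir -p "${RESTORE_DIR}"
cd "${RESTORE_DIR}"
pwd
```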

There are many more elements to both the ufsdump and ufsrestore commands. The man pages for these cover the details extensively. For the purpose of this document though, the procedures outlined here should be sufficient to allow for a flexible and convenient restoration of any amount of archived material.

The cavern & alumni systems

In order to perform restores to these systems, the above procedures will work effectively. However, some considerations should be noted at this time:

1. cavern, apsara and alumni are currently stored on one 8mm tape. Eventually they may span multiple tapes; if this occurs, a large file system could begin on tape one and end on tape two.

2. Optionally some material may be restored from ADSM backups.

3. Both cavern and alumni may be included in a backup scheme currently used for comp that uses the robotic HSM unit to write to 8mm tapes. At this time I don't know for sure whether the restore methods will remain the same.

In any case it is planned to continue to perform backups to the 8mm unit even if HSM backups are done. It will always be convenient to have local backup volumes readily at hand for the more mundane restore needs that may arise during normal system operation.

Copyright © 1997 University of Arkansas
All rights reserved