Hi. It looks like you're testing a disaster recovery situation. Right? If
so, should you be concerned with files in the /dev directory? I believe they
are rebuilt every time the system is rebuilt, so if you reinstall, or recover
from a mksysb, you should have the /dev directory intact.
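If you'd rather keep ADSM away from /dev entirely, an exclude statement in the client include-exclude list should do it. A sketch only; double-check the pattern against your own options file before relying on it:

```
* include-exclude list (dsm.sys, or the file named by the INCLEXCL option)
* The /.../* pattern matches everything under /dev, at any depth.
exclude /dev/.../*
```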
Regarding your question, are failures marked in the dsmerror.log and/or
dsmsched.log?
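For quick triage, something like this can pull the error lines out of both logs. The log paths are assumptions (they depend on where your client is installed), and the ANSnnnnE pattern is just the general shape of client error messages, so adjust to taste:

```shell
#!/bin/sh
# Scan the ADSM client logs for failure messages.
# Paths below are assumptions; adjust to your client's install location.
for log in /usr/lpp/adsm/bin/dsmerror.log /usr/lpp/adsm/bin/dsmsched.log
do
    [ -f "$log" ] || continue
    echo "== $log =="
    # ANS<number>E lines are client errors; 'Failed' marks per-file failures.
    grep -E 'ANS[0-9]+E|[Ff]ailed' "$log"
done
```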
Craig Treptow
Sr. Systems Engineer
Equitable of Iowa Companies
909 Locust Street
Des Moines, IA 50309
(515) 698-6726
(515) 698-2726 (Fax)
CATreptow AT equitable-of-iowa DOT com
>>> Matthew Emmerton <Matthew_Emmerton AT AGORAINC DOT NET> 6/2/98 12:29:57 PM
We're using:
Server -- Version 3, Release 1, Level 0.0
Command Line Backup Client Interface - Version 3, Release 1, Level 0.0
I did a full backup of my system today, along with a full database backup
as well as config files and all that jazz. Then I removed some crucial
files, and started to restore. I managed to get the server up and running,
and remounted the database and all was well.
I initiated the client, and issued a "restore / -subdir=yes" to restore
the entire system. I told the client (when prompted) to 1) overwrite any
existing files and 2) force updates on write-protected files. It then
failed restoring /dev/SRC.
The entry for /dev/SRC from another working system looks like:
srwxrwxrwx 1 root system 0 May 13 09:53 /dev/SRC
This looks like a special file; the leading "s" in the mode marks it as a
socket. I imagine my problem is caused by forcing ADSM to restore at all
costs. However, this raises an important issue for when we may have to
restore a system completely from backup. Is there a way during restoration
for it to mark files as "failed" (just like the server marks files as
"failed" during a backup) and then review those failures afterward?
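In the meantime I can approximate this by capturing the client's console output and sifting it afterward. The restore command is the one from above; the ANS message pattern is a guess at the failure message shape, not something I've verified:

```shell
#!/bin/sh
# Run the full restore, keep everything it prints, then pull out lines
# that look like per-file failures for review once the restore finishes.
dsmc restore / -subdir=yes 2>&1 | tee /tmp/restore.out
grep -E 'ANS[0-9]+E|[Ff]ailed' /tmp/restore.out > /tmp/restore.failures
echo "Possible failures saved in /tmp/restore.failures"
```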
--
Matthew Emmerton, System Administration
Agora Food Merchants, National IT
+1 (905) 565-4231