My experience so far has been that restore speed is limited by the speed and configuration of the server. We had to restore an NT server (12 GB, P5-233 with 512 MB RAM) over 100 Mb Ethernet. The restore ran for over 26 hours, never used more than 25% of the network, and a QUERY SESSION on the ADSM server showed long periods of send wait. The client data was not compressed -- the client just couldn't keep up on the disk write side.
FWIW, I see the same thing in the AIX environment; I can shove the data off the server and over the network far faster than the client can actually write it.
Tom Kauffman
kauffmant AT nibco DOT com
-----Original Message-----
From: Stephan Rittmann [SMTP:srittmann AT FIDUCIA DOT DE]
Sent: Friday, March 12, 1999 7:59 AM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: ADSM and big file-servers
Hi all,
I want to start a discussion about ADSM and backing up big file servers. In our
environment we have 16 Mbit token-ring networks and we are using ADSM to back
up all of the critical data.
The biggest file server we use at the moment has an 18 GB data partition.
Backing up these servers with incremental backup is no problem. It has worked for a
long time, and everybody is satisfied with the short backup times. But what happens
in the case of a disk failure? If the server was very full, you have to restore up
to 18 GB. Over our kind of network this would take about 20 hours or more.
What I want to say is: the disks in the servers keep getting bigger, while the
backup time stays the same because of ADSM's incremental technique. I'm sure that
most ADSM users don't think about the long restore times in case of a disk failure.
The gap between the network speed and the size of the data disks keeps growing,
and I see a problem in that.
What do you think about this? And how could we solve this problem?
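To make the "about 20 hours" figure concrete, here is a quick back-of-envelope sketch (mine, not part of the original post). It assumes an effective sustained throughput of roughly 2 Mbit/s, since a shared 16 Mbit token ring rarely delivers anywhere near its nominal rate once protocol overhead and contention are accounted for; the function name and the 2 Mbit/s figure are illustrative assumptions.

```python
def restore_hours(data_gb: float, effective_mbit_s: float) -> float:
    """Estimate wall-clock time for a full restore over the network.

    data_gb          -- amount of data to restore, in gigabytes (2**30 bytes)
    effective_mbit_s -- throughput actually sustained end to end, in Mbit/s
                        (assumed; far below the nominal ring speed)
    """
    megabits = data_gb * 1024 * 8               # GB -> Mbit
    return megabits / effective_mbit_s / 3600.0  # Mbit / (Mbit/s) -> s -> h

# An 18 GB partition at an assumed effective 2 Mbit/s:
print(f"{restore_hours(18, 2.0):.1f} hours")    # -> 20.5 hours
```

The arithmetic shows the point of the post: the bottleneck is the effective line rate, so doubling the disk simply doubles the restore window.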
Stephan Rittmann
FIDUCIA AG, Karlsruhe
Germany