ADSM-L

Re: ADSM and big file-servers

2015-10-04 17:46:04
Subject: Re: ADSM and big file-servers
From: Francis Maes <fr.maes AT CGER DOT BE>
To: ADSM-L AT VM.MARIST DOT EDU
Hello Stephan,

We are facing the same problem. Our "big" file servers (NT) use 100 GB
disk partitions (filespaces for ADSM).
Filespaces this large cause two problems with ADSM:
1) The processing time on the client. A 100 GB filespace with user data may
contain up to 4 million files.
    The ADSM Client (NT) 3.1 easily takes 6 to 8 hours just to scan 4 million
files (on a dual-processor NT server with plenty of memory).
2) The transfer time in case of a restore is too long.
    We use an ATM link between our "big" (NT) clients and our (MVS) server.
The top speed is about 4 GB/hour => a minimum of 25 hours for 100 GB.
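The estimate above is simple division; as a minimal sketch (the 4 GB/hour figure is the observed ATM throughput quoted above, and the helper name is illustrative, not an ADSM tool):

```python
def restore_hours(data_gb, throughput_gb_per_hour):
    """Estimate wall-clock restore time from data size and sustained throughput."""
    return data_gb / throughput_gb_per_hour

# 100 GB filespace over a link sustaining ~4 GB/hour:
print(restore_hours(100, 4))  # -> 25.0 hours
```

The same formula shows why filespace size, not incremental backup time, is the real constraint: doubling the partition doubles the worst-case restore window.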

I have been pressing IBM for improvements on this subject for a long time.

For me, a 20 GB filespace is the maximum for ADSM as it stands today.

Hopefully the next release will be better...

Best regards,

Francis

_______________________________________________________________________
Francis Maes                    ASLK-CGER Services GIE - Belgium
ADSM Server Administrator       Rue Fossé-aux-Loups, 48 - 1000 Brussels
Storage Management              E-Mail: fr.maes AT cger DOT be



-----Original Message-----
From: Stephan Rittmann <srittmann AT FIDUCIA DOT DE>
To: ADSM-L AT VM.MARIST DOT EDU <ADSM-L AT VM.MARIST DOT EDU>
Date: Friday, 12 March 1999 14:09
Subject: ADSM and big file-servers


>
>Hi all,
>
>I want to start a discussion about ADSM and backing up big file servers.
>In our environment we have 16 Mbit token-ring networks, and we use ADSM
>to back up all of the critical data.
>The biggest file server that we use at the moment has an 18 GB data
>partition. Backing up these servers with incremental backup is no
>problem. It has worked this way for a long time, and everybody is
>satisfied with the short backup times. But what will happen in the case
>of a disk failure? If the server was very full, you have to restore up
>to 18 GB. With our kind of network this would take about 20 hours or
>more.
>What I want to say is this: the disks in the servers keep getting
>bigger, while the backup time stays the same because of ADSM's
>incremental technique. I'm sure that most ADSM users don't think about
>the long restore times in case of a disk failure.
>The gap between the network speed and the size of the data disks keeps
>growing, and I see a problem in this.
>
>What do you think about this? And how could we solve this problem?
>
>Stephan Rittmann
>FIDUCIA AG, Karlsruhe
>Germany
>
>
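Stephan's 20-hour figure for 18 GB over 16 Mbit token ring can be checked with the same kind of back-of-the-envelope arithmetic; a minimal sketch (the 18 GB and 20-hour numbers come from the message above, and the comparison against the raw line rate is my own illustration):

```python
LINE_RATE_MBIT = 16      # token-ring nominal line rate, Mbit/s
data_gb = 18             # partition size quoted in the message
restore_hours = 20       # restore time quoted in the message

line_rate_mb_s = LINE_RATE_MBIT / 8                     # 2.0 MB/s at full line rate
ideal_hours = data_gb * 1024 / line_rate_mb_s / 3600    # best case, no overhead
effective_mb_s = data_gb * 1024 / (restore_hours * 3600)

print(f"ideal:     {ideal_hours:.1f} h")      # ~2.6 h at full line rate
print(f"effective: {effective_mb_s:.2f} MB/s")  # ~0.26 MB/s implied by 20 h
```

The gap between the ~2.6 hour ideal and the 20 hours observed shows that protocol overhead, shared media, and per-file restore costs, not just raw bandwidth, dominate large restores.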