ADSM-L

Re: ADSM and big file-servers

1999-03-16 10:04:42
From: Nathan King <nathan.king AT USAA DOT COM>
Date: Tue, 16 Mar 1999 09:04:42 -0600
You are correct. I have pasted the following from the server.txt in the
readmes for the ADSM NT Server. IBM appears to be well aware of the
performance problem.


 2.4 Performance issues
  ----------------------

    There are a large number of variables that can affect NT and ADSM
    for Windows NT performance.  Improving performance usually involves
    a detailed examination of key system components to determine where
    the bottlenecks are on both the ADSM server and the ADSM client
    machines.

    Performance Monitor, which ships with Windows NT, is an excellent
    tool for determining where system bottlenecks are.

    In general, adding RAM, using the fastest available SCSI disk drives,
    SCSI controllers, network interface cards, and CPUs as well as
    SMP machines will all improve the performance of NT and ADSM for
    Windows NT.

    Next to system hardware, the number and size of client files have a
    big impact on ADSM performance.  Throughput when transferring
    small files is lower than when transferring large files.

    For example, using a 133MHz Pentium, the FAT file system, a local
    ADSM client with no compression, and named pipes to a disk storage
    pool, throughput has been measured at:

                                     File Size
    Throughput          1KB      10KB    100KB     10MB
    KB/Sec                57       445     1280     1625
    GB/hr               0.20      1.53     4.39     5.58
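
    The two rows of the table are consistent with each other; as a quick
    sanity check (a minimal sketch, assuming 1 GB = 1024 * 1024 KB), the
    KB/sec figures can be converted to GB/hr as follows:

    ```python
    # Convert measured throughput from KB/sec to GB/hr.
    # Assumes 1 GB = 1024 * 1024 KB, matching the table above.

    KB_PER_GB = 1024 * 1024
    SECONDS_PER_HOUR = 3600

    def kb_per_sec_to_gb_per_hr(kb_per_sec):
        return kb_per_sec * SECONDS_PER_HOUR / KB_PER_GB

    # Measured figures from the table (file size -> KB/sec)
    measured = {"1KB": 57, "10KB": 445, "100KB": 1280, "10MB": 1625}
    for size, rate in measured.items():
        print(f"{size}: {kb_per_sec_to_gb_per_hr(rate):.2f} GB/hr")
    ```

    Running this reproduces the GB/hr row (0.20, 1.53, 4.39, 5.58).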

    A simple test using ftp to transfer a set of files from the ADSM
    client machine to the ADSM server machine can provide a reference
    point for the network performance between the two machines.  In
    general, when transferring the same set of files, ADSM should not
    perform much differently than FTP.

    The choice of file systems also comes into play with regard to
    performance. NTFS compressed drives should not be used to hold
    ADSM disk volumes (database, recover log, or disk storage
    volumes).  On the client side, backing up NTFS data can be
    significantly slower than backing up FAT data. On the server side,
    use of NTFS is also slower than using FAT.  The tradeoff is that
    NTFS has security and is more robust than the FAT filesystem.

    There are many good sources of information regarding NT performance
    tuning.  The Windows NT Resource Kit, for example, provides good
    coverage of the use of Performance Monitor, as do a number of other
    NT-related books.

        -----Original Message-----
        From:   McAllister Craig-WCM033 [SMTP:Craig_McAllister AT EUROPE36.MOT DOT COM]
        Sent:   Tuesday, March 16, 1999 8:40 AM
        To:     ADSM-L AT VM.MARIST DOT EDU
        Subject:        Re: ADSM and big file-servers

        All,
                I feel I should provide a contrary opinion on this one.  I
        too have big NT servers (3 at 105GB each), and I've had very few
        problems with backing them up - or restoring from them.  This may
        be because the files I have tend to be larger, and therefore there
        are fewer of them (we have one really busy server with 83GB made up
        of 600,000 files).  Our network is 100Mbps FDDI.  I believe the
        problem is more with the number of files than with the size of the
        individual files.  Maybe ADSM development could take this and work
        out some way of traversing the directory structure under NT in a
        more efficient way.  Then again, maybe not.  (Hands tied by
        Microsoft's poor programming...)

                -Craig.


        -----Original Message-----
        From: Francis Maes [mailto:fr.maes AT CGER DOT BE]
        Sent: 16 March 1999 11:10
        To: ADSM-L AT VM.MARIST DOT EDU
        Subject: Re: ADSM and big file-servers


        Hello Stephan,

        We are facing the same problem.  Our "big" (NT) file servers are
        using 100GB disk partitions (filespaces for ADSM).
        This kind of "big" filespace gives two kinds of problems with ADSM:
        1) The processing time on the client.  A 100GB filespace with user
        data may contain up to 4 million files.
            The ADSM Client (NT) 3.1 easily takes 6 to 8 hours just to
        scan 4 million files (on a dual-processor NT server with plenty
        of memory).
        2) The transfer time, in the case of a restore, is too long.
            We are using ATM between our "big" (NT) clients and our (MVS)
        server.  The top speed is +/- 4GB/hour => a minimum of 25 hours
        for 100GB.
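
        The restore-time arithmetic here generalizes; as a minimal sketch
        (assuming sustained throughput with no per-file overhead, which
        understates real restore times for filespaces with millions of
        small files):

        ```python
        # Estimate restore time from filespace size and sustained
        # throughput.  Figures below are the ones quoted in this thread;
        # real throughput varies with file counts, network, and load.

        def restore_hours(size_gb, rate_gb_per_hour):
            return size_gb / rate_gb_per_hour

        # 100GB filespace over ATM at ~4GB/hour
        print(restore_hours(100, 4))          # 25.0 hours

        # 18GB partition over 16Mbit token ring at roughly 0.9GB/hour
        # (rate inferred from the 18GB / ~20 hour figures quoted below)
        print(round(restore_hours(18, 0.9)))  # ~20 hours
        ```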

        I have been asking IBM for improvements on this subject for a
        long time.

        For me, a 20GB filespace is the maximum for ADSM as it stands now.

        Hopefully the next release will be better.......

        Best regards,

        Francis

        
_______________________________________________________________________
        Francis Maes                    ASLK-CGER Services GIE  - Belgium
        ADSM Server Administrator Rue Fossé-aux-Loups, 48 - 1000 Brussels
        Storage Management          E@Mail: fr.maes AT cger DOT be



        -----Original Message-----
        From: Stephan Rittmann <srittmann AT FIDUCIA DOT DE>
        To: ADSM-L AT VM.MARIST DOT EDU <ADSM-L AT VM.MARIST DOT EDU>
        Date: Friday, 12 March 1999 14:09
        Subject: ADSM and big file-servers


        >
        > Hi all,
        >
        >I want to start a discussion about ADSM and backing up big file
        >servers.  In our environment we have 16 Mbit token-ring networks
        >and we are using ADSM to back up all of the critical data.
        >The biggest file server that we use at the moment has an 18 GB
        >data partition.
        >Backing up these servers with incremental backup is no problem.
        >It has worked for a long time, and everybody is satisfied with
        >the short backup times.  But what will happen in the case of a
        >disk failure?  If the server was very full, you have to restore
        >up to 18 GB.  With our kind of network this would take about 20
        >hours or more.
        >What I want to say is: the disks in the servers become bigger
        >and bigger, while the backup time stays the same because of
        >ADSM's incremental technique.
        >I'm sure that most ADSM users don't think about the long restore
        >times in case of a disk failure.
        >The gap between the network speed and the size of the data disks
        >keeps growing, and I see a problem in this fact.
        >
        >What do you think about this?  And how could we solve this
        >problem?
        >
        >Stephan Rittmann
        >FIDUCIA AG, Karlsruhe
        >Germany
        >
        >