ADSM-L

Subject: Re[2]: Slow recovery of 4 GB servers
From: Christian Moser <Christian.Moser AT SCHERING DOT DE>
Date: Thu, 15 Feb 1996 17:22:17 EST
     Dwight,

     we back up 28 GB of uncompressed databases (i.e. large objects) in
     about 5 hours each night via FDDI from an HP-UX client (using the
     BACKINT SAP R/3 ADSM API) to our MVS server, which is connected to
     the ring by an IBM 3172-3. Our network specialists told me that the
     ring is normally about 16-18% busy with that.
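
     As a quick sanity check of those figures (a sketch, assuming only
     FDDI's nominal 100 Mbit/s ring speed; the 28 GB and 5 h numbers
     are the ones above):

        # Back-of-the-envelope check of the 28 GB / 5 h figures above.
        # Assumption: FDDI's nominal ring speed of 100 Mbit/s.
        GB = 1024 ** 3                   # bytes per gigabyte
        data_bytes = 28 * GB             # 28 GB of uncompressed databases
        elapsed_s = 5 * 3600             # about 5 hours

        mb_per_s = data_bytes / elapsed_s / 1024 ** 2
        mbit_per_s = data_bytes * 8 / elapsed_s / 1e6
        ring_share = mbit_per_s / 100    # fraction of a 100 Mbit/s ring

        print(f"{mb_per_s:.2f} MB/s = {mbit_per_s:.1f} Mbit/s")
        print(f"~{ring_share:.0%} of the ring")
        # -> ~1.59 MB/s, ~13.4 Mbit/s, ~13% of the ring: roughly in
        #    line with the reported 16-18% once protocol overhead and
        #    other traffic are counted.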

     What do you mean by slow?

     An FDDI ring that is saturated by the ADSM traffic of only
     two machines seems suspicious to me.
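
     To put "slow" into numbers, here is a rough yardstick for restoring
     a 4 GB server (a sketch; the 4 GB size comes from the thread
     subject, and the example rates are assumptions, not measurements
     from this thread):

        # Rough restore-time yardstick for a 4 GB server (size from the
        # thread subject). The rates are assumed examples: the ~1.6 MB/s
        # implied by the 28 GB / 5 h backup above, plus two faster rates
        # for comparison; none of them are measured values.
        GB = 1024 ** 3
        restore_bytes = 4 * GB

        for rate_mb_s in (1.6, 6.0, 12.0):
            minutes = restore_bytes / (rate_mb_s * 1024 ** 2) / 60
            print(f"{rate_mb_s:5.1f} MB/s -> {minutes:5.1f} minutes")
        # -> about 42.7, 11.4 and 5.7 minutes respectively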

     Good luck,
     Christian Moser,           Internet: christian.moser AT schering DOT de
     SCHERING AG, 13342 Berlin, Tel.: (030)4681356 Fax: (030)46916719



____________________________ Reply Separator ____________________________________
Subject: Re: Slow recovery of 4 GB servers
Author:  ADSML (INTERNET.ADSML1) at SNAPI
Date:    15.02.1996 16:29


     FROM   :   INTERNET.ADSML1
                ADSM-L AT VM.MARIST DOT EDU

     DATE   :   02/15/96
     SUBJECT:   Re: Slow recovery of 4 GB servers

     Well... traffic only moves at the pace the highway supports...

     We have an RS/6000 591 ADSM server on an FDDI ring with multiple
     other RS/6000s. Even when the only traffic on the ring is a single
     RS/6000 backing up, the ring becomes saturated...
     I've noticed that ADSM seems to be well tuned to use all the
     resources available, yet stay fair and not take them all away from
     other network traffic and/or CPU cycles on the server/(ADSM client).
     I haven't done much monitoring yet... I'm going to ASAP, but right
     now so many machines are rolling in daily... Sigh... nothing like
     job security...
     If anyone out there has numbers on transfer rates they have
     monitored in their environments, could you post them...?
     later
          Dwight