Subject: Re: 3590/9840 debate....PLEASE REPLY
From: Joe Faracchio <brother AT SOCRATES.BERKELEY DOT EDU>
Date: Fri, 28 Apr 2000 15:53:16 -0700
We've been running 3590s (20 Gig native) since November on an AIX ADSM server
with two drives in a 3494, and the problems we've had were all recoverable.

I've changed the cleaning tape threshold in the 3494 down to 40 mounts,
and that helps.  (Is there a way to get ADSM to manage that, or is leaving
it to the 3494 a better idea, or at least OK?)
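
For what it's worth, later ADSM/TSM levels have a CLEANFREQUENCY parameter
on the drive definition, which would put cleaning under server control
instead of the library's.  A minimal sketch, assuming a server level that
supports it and made-up library/drive names (I'm not certain it applies to
a 3494 as opposed to SCSI libraries, so library-managed cleaning may still
be the safer choice):

    /* Assumes CLEANFREQUENCY support; 3494LIB/DRIVE1 are made-up names */
    UPDATE DRIVE 3494LIB DRIVE1 CLEANFREQUENCY=ASNEEDED
    /* Verify the setting took effect */
    QUERY DRIVE 3494LIB DRIVE1 FORMAT=DETAILED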

My CE has upgraded the microcode to be less sensitive to temporary errors,
so it stopped kicking out tapes that were perfectly good.  (I'd do a MOVE
DATA and then set them aside, but after 7 or 8 of them I had the CE upgrade
the microcode, and I've since recycled those tapes without
problems.)   ... knock wood!
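
In case it helps anyone hitting the same thing, here's the
drain-a-suspect-tape sequence I mean, sketched with a hypothetical volume
name (VOL001):

    /* Hypothetical volume name; stop new writes to the suspect tape */
    UPDATE VOLUME VOL001 ACCESS=READONLY
    /* Move everything still readable onto other volumes in the pool */
    MOVE DATA VOL001
    /* Confirm the volume emptied out before recycling it */
    QUERY VOLUME VOL001 FORMAT=DETAILED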

And I beat the hell out of my system!  I run COPYPOOL reclamation every
weekend with REC=20 and make sure I get at least 15 tapes redone.
I reclaim the TAPEPOOLs during the week at about 20 as well and get one or
two tapes reclaimed every day.  I run collocation, so I have a couple hundred
mounts every day.  I have a 20 Gig DB and take in from 5 to 20 Gigs daily.
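
The weekend copy-pool runs are just the reclamation threshold being dropped
and then raised again via administrative schedules; a sketch with made-up
schedule names, assuming the copy pool really is named COPYPOOL:

    /* Drop the copy pool reclamation threshold Friday evening... */
    DEFINE SCHEDULE RECLAIMON TYPE=ADMINISTRATIVE CMD="UPDATE STGPOOL COPYPOOL RECLAIM=20" ACTIVE=YES STARTTIME=18:00 DAYOFWEEK=FRIDAY
    /* ...and raise it back Monday morning (100 effectively disables it) */
    DEFINE SCHEDULE RECLAIMOFF TYPE=ADMINISTRATIVE CMD="UPDATE STGPOOL COPYPOOL RECLAIM=100" ACTIVE=YES STARTTIME=06:00 DAYOFWEEK=MONDAY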

                        joe.f.

Joseph A Faracchio,  Systems Programmer, UC Berkeley


On Fri, 28 Apr 2000, Burton, Robert wrote:

> We are currently running a 3494 tape library with 12 3590 drives....
> We are in the process of deciding whether to stick with our current
> tape media or switch to an STK library with 9840s...
> Recently we have been bitten by 3590 READ errors during our copy pool
> processing which we could not recover with a MOVE DATA...
>
> If you would be so kind, I would like to get replies from 3590 and 9840
> users about any media problems that you have experienced that caused
> you to lose data....
>
> thanks fellow ADSM/TSM'ers
> Robert Burton
> Open Systems Storage Analyst
> Royal Bank of Canada
> (416) 348-3849
> Robert.Burton AT RoyalBank DOT com
>