Subject: Re: Tape drive recommendations
From: DFrance <DFrance-TSM AT ATT DOT NET>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Sat, 9 Nov 2002 00:51:18 -0600
So... through most of this thread, the upshot is to consider:
1. Avoid cache=yes; instead use a large disk pool with migdelay=1 *and* avoid
offsite reclamation -- maybe defer the offsite reclamation to weekends,
after clearing out the disk pools.
2. Use not one but TWO tape technologies -- 3590 (J or K) or 9940A's for
onsite, then LTO or 9940B's for offsite?!? (I don't see many customers
going for that -- two different types of tape, where most just want to
simplify, K.I.S.S. it for the sake of operator procedures!)
3. Larger arrays of ATA drives, 1 TB for under $10K -- holy cow, we gotta
find a way to deal with that... how about large FILE libraries on disk
(with or without SANergy)? Disk pools could migrate to FILE volumes on
disk -- recently expanded to allow more than 100 volumes for SANergy
support -- so exploit it as the onsite, primary sequential pool.
4. If we truly want to configure a (random-access) disk pool large enough to
hold two days' worth of data, for fast restore, then we must avoid
reclamation during that period... ah-ha, with this large array of ATA drives
configured as a FILE (sequential) pool for nextstgpool, that would seem to
relieve the offsite reclamation *and* the tape contention for primary pools
(a rough command sketch follows below this list).
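
Purely as a sketch of what items 1, 3 and 4 might look like in server
commands -- the pool, device-class and schedule names here are invented,
and the syntax is from memory against a 5.x-level server, so check the
Admin Reference for your level:

    /* FILE device class and primary sequential pool on the ATA array      */
    DEFINE DEVCLASS ATAFILE DEVTYPE=FILE MAXCAPACITY=2048M MOUNTLIMIT=16 DIRECTORY=/tsm/atafile
    DEFINE STGPOOL ATAPOOL ATAFILE MAXSCRATCH=200

    /* no cache; hold a day on disk, then migrate to the FILE pool         */
    UPDATE STGPOOL BACKUPPOOL CACHE=NO MIGDELAY=1 NEXTSTGPOOL=ATAPOOL

    /* keep offsite reclamation off during the week, let it run weekends   */
    UPDATE STGPOOL OFFSITE_COPY RECLAIM=100
    DEFINE SCHEDULE RECLAIM_ON TYPE=ADMINISTRATIVE CMD="UPDATE STGPOOL OFFSITE_COPY RECLAIM=60" ACTIVE=YES STARTTIME=08:00 DAYOFWEEK=SATURDAY
    DEFINE SCHEDULE RECLAIM_OFF TYPE=ADMINISTRATIVE CMD="UPDATE STGPOOL OFFSITE_COPY RECLAIM=100" ACTIVE=YES STARTTIME=06:00 DAYOFWEEK=MONDAY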

All the earlier advice still leaves us with slow DR recovery (due to a
server's data being spread across many more tapes than it would be with
collocation). But wait -- the truly mission-critical stuff usually comes in
three flavors: the dbf's for the database, the logs to recover up to the
most recent sweep of redo logs, and the binaries & config files (the OS flat
files). The dbf's are full backups daily, so it's really just a matter of
tracking down the collocated logs & dbf's, plus the non-collocated OS
backups. What's a person to do? Collocation is not a good answer; I suspect
grouping nodes into app-based pools (not critical vs. prod vs. dev/qa), so
the critical data gets spread among enough tapes that restores of multiple
critical nodes don't experience tape contention (from non-critical nodes'
data separating them out).
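
A rough way to sketch that "app-based pools" idea is separate primary pools
bound through their own management classes -- all names below are invented,
the domain/policy-set names assume the defaults, and the client-side binding
is only indicated as a comment:

    /* one primary disk+tape pool chain per application                    */
    DEFINE STGPOOL ERP_TAPEPOOL 3590CLASS MAXSCRATCH=100
    DEFINE STGPOOL ERP_DISKPOOL DISK NEXTSTGPOOL=ERP_TAPEPOOL
    /* (define and format disk volumes for ERP_DISKPOOL as usual)          */
    DEFINE MGMTCLASS STANDARD STANDARD ERP_MC
    DEFINE COPYGROUP STANDARD STANDARD ERP_MC TYPE=BACKUP DESTINATION=ERP_DISKPOOL
    ACTIVATE POLICYSET STANDARD STANDARD
    /* then bind the app's files on each node via client options, e.g.:    */
    /*   INCLUDE /oradata/.../* ERP_MC                                     */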

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:don_france AT att DOT net

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of
Rushforth, Tim
Sent: Thursday, October 31, 2002 8:04 AM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Re: Tape drive recommendations


And the issue is still there if you don't use CACHE=YES but also don't
completely clear your backup pool.

We have more disk in our storage pool than is required for one night's
incremental backups - so we thought keeping some backups on disk was a good
thing (why migrate to tape if you don't need the space?).
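
In case it helps anyone following along, the knob behind "why migrate if you
don't need the space" is the pair of migration thresholds on the disk pool --
a minimal sketch, with an invented pool name and example percentages:

    /* data sits on disk until utilization crosses HIGHMIG; migration then */
    /* runs until utilization drops below LOWMIG                           */
    UPDATE STGPOOL BACKUPPOOL HIGHMIG=90 LOWMIG=70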


Tim Rushforth
City of Winnipeg

-----Original Message-----
From: Bill Boyer [mailto:bill.boyer AT VERIZON DOT NET]
Sent: October 31, 2002 9:01 AM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Re: Tape drive recommendations

Be careful of your copypool reclamations with the disk cache turned on!!
There is a BIG performance hit on reclamation when the primary copy of the
file is on a DISK direct access storage pool. Then the MOVESIZETHRESH and
MOVEBATCHSIZE values are thrown out the window and the files are processed
one at a time.
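
For reference, those two are server options (set in dsmserv.opt) rather than
storage pool parameters -- the values below are only examples, and as noted
above they effectively stop helping once the primary copy of a file lives in
a random-access DISK pool:

    * dsmserv.opt -- batch-movement tuning (example values; MOVESIZETHRESH is in MB)
    MOVEBATCHSIZE   256
    MOVESIZETHRESH  1024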

What I've done to relieve the restore times is to not MIGRATE the disk pools
until the end of the day. That way restoring from last night is quick. I had
a client who wanted CACHE=YES on a 60GB disk pool; the offsite copypool
reclamation ran for two days! I changed it so that migration started at
5:00pm, and nobody complained about restore times.
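
A rough sketch of that kind of schedule, with invented names and times -- it
just drops the migration thresholds to drain the pool in the evening, then
raises them again in the morning:

    /* 5:00pm: drain the disk pool to tape                                  */
    DEFINE SCHEDULE MIG_START TYPE=ADMINISTRATIVE CMD="UPDATE STGPOOL BACKUPPOOL HIGHMIG=0 LOWMIG=0" ACTIVE=YES STARTTIME=17:00 PERIOD=1 PERUNITS=DAYS
    /* 6:00am: effectively switch automatic migration back off for the day  */
    DEFINE SCHEDULE MIG_STOP TYPE=ADMINISTRATIVE CMD="UPDATE STGPOOL BACKUPPOOL HIGHMIG=100 LOWMIG=90" ACTIVE=YES STARTTIME=06:00 PERIOD=1 PERUNITS=DAYS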

Bill Boyer
DSS, Inc.

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of
Steve Schaub
Sent: Thursday, October 31, 2002 5:59 AM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Re: Tape drive recommendations


This is one reason I am looking into some of the new, cheaper ATA-based
disk arrays.  98% of restores come from the last few versions, so if you
can size the diskpool large enough (and turn caching on) that you rarely
need to go to tape, restores scream.  Some of the initial prices I am
seeing are < $10k per TB.  It's not SSA, but for a diskpool it might be
fast enough.
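
If anyone wants to try that, the cache setting lives on the disk pool
itself -- a minimal sketch with a made-up pool name (keeping in mind the
copy pool reclamation caveat raised earlier in this thread):

    /* cached copies remain on disk after migration until the space is needed */
    UPDATE STGPOOL DISKPOOL CACHE=YES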

-----Original Message-----
From: asr AT UFL DOT EDU
Sent: Wednesday, October 30, 2002 10:15 PM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Re: Tape drive recommendations

=> On Wed, 30 Oct 2002 16:42:14 -0600, "Coats, Jack"
<Jack.Coats AT BANKSTERLING DOT COM> said:

> From my fox hole, LTO works great, but in some ways it is 'too big'.
> The spin time on the tapes is about 3 minutes to rewind and unmount a
> tape, meaning that if you have to scan down a tape to restore a file,
> it can be a while.  Very fast tapes tend to be small, so it is a real
> tradeoff.

> Speed of restore is starting to be a factor here, and I have seen
> several posts where that is becoming more of an issue at many sites.
> But the architecture of TSM that makes it great also gets in the way
> of high-speed restores, unless you have lots of slots in a large
> library for a relatively small number of clients (collocation and/or
> backup sets -- for these, many smaller tapes might be better, but I
> digress).


Our call on this is congealing: use LTO for less-often-read storage,
i.e. copy pools.  If we can keep primary pools on 3590s, we can get up
to 60G raw on the -K volumes.  That seems plenty at the moment.

We can use the 200G-raw (coming soon!) LTO volumes for copies, and read
from them correspondingly less often.

LTO drives are, at the very least, a cheap way to increase your drive
count.
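
For anyone sketching the same split, it comes down to two device classes
feeding different pools -- the names, library names and scratch counts below
are invented, and the syntax is from memory, so treat it as an outline only:

    /* primary sequential pool on 3590 K cartridges, copy pool on LTO       */
    DEFINE DEVCLASS 3590CLASS DEVTYPE=3590 FORMAT=DRIVE LIBRARY=3494LIB
    DEFINE DEVCLASS LTOCLASS  DEVTYPE=LTO  FORMAT=DRIVE LIBRARY=LTOLIB
    DEFINE STGPOOL TAPE_PRIMARY 3590CLASS MAXSCRATCH=300
    DEFINE STGPOOL LTO_COPYPOOL LTOCLASS POOLTYPE=COPY MAXSCRATCH=300
    /* nightly: BACKUP STGPOOL TAPE_PRIMARY LTO_COPYPOOL                    */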

- Allen S. Rout
