I see this is a 3494 library with 3590 drives.
Our big DB is 1 TB, but it comes in as about 275 GB due to client
compression.
Our critical SAP production DBs all have their own diskpools with
enough DASD to hold an entire day's archives and archived redo logs.
(The one above has a 280 GB diskpool with cache turned on.)
BUT because the data is already compressed, we only fit about 9.98 GB per
3590 tape, so in a sense the sheer volume of data provides a form of
collocation. Each day's archives suck up about 28 tapes. The client
has about 231 file spaces, which works out to an average of 8 file
spaces per tape. If we run 32 concurrent retrieve processes, then "on
average" we will have 4 pulling from tapes and 28 waiting on media.
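The tape and session math above can be sketched quickly; this is just a back-of-envelope check using the figures stated in the post (tape capacity, filespace count, and drive count are as quoted, not measured here):

```python
# Numbers as stated in the post.
daily_archive_gb = 275     # compressed client data archived per day
gb_per_3590_tape = 9.98    # effective 3590 capacity with pre-compressed data
filespaces = 231
sessions = 32              # concurrent retrieve processes
drives = 4                 # 3590 drives per ADSM server

tapes_per_day = daily_archive_gb / gb_per_3590_tape      # ~27.6, call it 28
filespaces_per_tape = filespaces / round(tapes_per_day)  # ~8
reading = min(sessions, drives)    # sessions actually mounted on a drive
media_wait = sessions - reading    # sessions queued in media wait

print(round(tapes_per_day), round(filespaces_per_tape), reading, media_wait)
```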
(Oh, we only run with 4 3590s per ADSM server, except where people
really whined and forked over $$$$.)
We archive 1 TB in about 10 hours and retrieve it in about 12 hours
(the extra time on the retrieve is due to writing the "mirror" copy
over on the client). Tape retrieves are not much worse... I don't
think I'd still be here if they ever hit over 24 hours... :-(
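For scale, those times work out to roughly the following rates (a quick sketch assuming 1 TB ~ 1000 GB; the DB size and hours are as stated above):

```python
# Rough throughput from the archive/retrieve times quoted above.
db_gb = 1000           # ~1 TB database
archive_hours = 10
retrieve_hours = 12    # slower: the client also writes its "mirror" copy

archive_rate = db_gb / archive_hours      # ~100 GB/hour
retrieve_rate = db_gb / retrieve_hours    # ~83 GB/hour
print(round(archive_rate), round(retrieve_rate))
```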
My offerings :
1) client compression
2) run way more concurrent retrieve/restore processes than you have
resources... this ensures full utilization of hardware even though
some sessions might be in media wait.
later,
Dwight
______________________________ Reply Separator _________________________________
Subject: Re: A NEWBIES perspective - ADSM and big file-servers - an
Author: lipp (lipp AT STORSOL DOT COM) at unix,mime
Date: 3/16/99 9:14 PM
Stephen,
How many restore streams were you running? 300 GB in 35 hours? That's 8-9
GB/hour. How fast did your customer think you should do it? With a single
tape drive and GFS, I don't think you would have been any faster.
Collocation at the filespace level is the only kind of collocation that
will really help you on this, I would think, and then only if you have
multiple file spaces.
Kelly J. Lipp
Storage Solutions Specialists, Inc.
www.storsol.com
lipp AT storsol DOT com
(719)531-5926