Subject: Re: Useful 'move data' options
From: "Weeks, Debbie" <debbie AT ADMIN.USF DOT EDU>
Date: Fri, 17 Jul 1998 13:56:19 -0400
We also have and like DRM, but as far as I know the entire copypool goes
to vault.  You can't separate out just active versions.

On a similar issue, we still "lose" tapes at the vault.  They apparently
go into VAULTR status during the week, but are deleted from the
inventory before our weekly tape pull on Monday.  So we now manually
check our vault inventory monthly, and I always find at least 20 tapes
that have disappeared from ADSM and can be checked back in as scratch.
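
(For reference, the normal DRM return cycle is something like the
following; the library and volume names here are only placeholders:

    query drmedia * wherestate=vaultretrieve
    move drmedia * wherestate=vaultretrieve tostate=onsiteretrieve
    checkin libvolume 3494LIB VOL001 status=scratch

The query lists what DRM still thinks is ready to come back, the move
flags those volumes for return, and the checkin puts a returned tape
back in the library as scratch.  Of course none of that can find the
tapes that have already dropped out of the inventory, which is why we
do the manual check.)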

> -----Original Message-----
> From: Jackie Balboni [SMTP:JBALBONI AT OCEANSPRAY DOT COM]
> Sent: Friday, July 17, 1998 12:11 PM
> To:   ADSM-L AT VM.MARIST DOT EDU
> Subject:      Re: Useful 'move data' options
>
> We use DRM for our copy storage pool.  Reclamation can be done against
> these tapes, and they will go into vault retrieve and can then go back
> to scratch.
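>
> Roughly, the knobs involved look like this (the pool name is just an
> example):
>
>     update stgpool copypool reclaim=60
>     query volume stgpool=copypool access=offsite
>
> Lowering the reclaim threshold makes the server rebuild the remaining
> files onto fresh offsite tapes (reading from the onsite primaries),
> and the emptied offsite volumes then show up in vault retrieve.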
>
> >>> "Weeks, Debbie" <debbie AT ADMIN.USF DOT EDU> 07/17 11:21 AM >>>
> We only have backup activity at night at this time, all to diskpool,
> so I back up my diskpool to copypool every morning prior to migration,
> for efficiency (as you say).  So, can you take it a little further?
> How would I track just active versions beyond that?  Eventually the
> offsite tapes would contain (as they do now) a mix of active and
> inactive versions, with no way of expiring inactive versions on
> copypool only.
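>
> (The morning sequence, roughly, with our pool names swapped for
> placeholders:
>
>     backup stgpool diskpool copypool wait=yes
>     update stgpool diskpool highmig=0 lowmig=0
>
> i.e., copy the new files to the copypool while they are still on
> disk, then force migration down to the primary tape pool, and put the
> migration thresholds back afterwards.)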
>
> I realize that a lot of installations require a copy of the entire
> storage pool to ensure recovery from media failure, but not all sites
> find that insurance worth the price of the tapes.  I think I have
> expressed this to this list before, so I won't bore you with my
> whining about this again.  But if there is an easy, viable way to
> achieve this with ADSM, I would like to hear it.
>
> > -----Original Message-----
> > From: Bill Colwell [SMTP:bcolwell AT DRAPER DOT COM]
> > Sent: Friday, July 17, 1998 10:30 AM
> > To:   ADSM-L AT VM.MARIST DOT EDU
> > Subject:      Re: Useful 'move data' options
> >
> > In <1A16375E84ABD111AF2200203568362F30AF AT calypso.cfr.usf DOT edu>,
> > on 07/17/98 at 09:15 AM, "Weeks, Debbie" <debbie AT ADMIN.USF DOT EDU>
> > said:
> >
> > >I agree!  This type of function might also enable me to only send
> > >the active versions offsite, something we are truly interested in.
> >
> > >> -----Original Message-----
> > >> From: Hilton Tina [SMTP:HiltonT AT TCE DOT COM]
> > >> Sent: Friday, July 17, 1998 8:16 AM
> > >> To:   ADSM-L AT VM.MARIST DOT EDU
> > >> Subject:      Re: Useful 'move data' options
> > >>
> > >> I think another handy addition to either move data or reclamation
> > >> would be to move all the active files to a set of tapes, separate
> > >> from the inactive files.  If that were done every so often, it
> > >> would reduce the number of tapes needed to restore a node.  I've
> > >> had some people try to get me to schedule full backups to
> > >> accomplish this, but I've been able to refuse so far.
> > >>
> > >> Anyone else agree?
> > >>
> >
> > Slow down and think this through a little bit.  To make copypool
> > tapes as efficiently as possible, you should be backing up the disk
> > storage pool frequently.  I do it 4 times a day.  Since backup
> > storagepool is an incremental process, it only takes the new files,
> > which are all active versions.
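> >
> > An administrative schedule handles the 4-times-a-day part; something
> > like this, where the pool and schedule names are just examples:
> >
> >     define schedule stgback type=administrative active=yes -
> >       cmd="backup stgpool diskpool copypool" -
> >       starttime=06:00 period=6 perunits=hours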
> >
> > Regarding enhancing move data, I know IBM already has a user
> > requirement for it, because I submitted one at a GUIDE conference
> > more than 3 years ago.  Unfortunately it was returned as a
> > suggestion.  Perhaps someone at IBM could look up the number so
> > others can concur on it.  This is still the best procedure, isn't it?
> >
> > --
> > -----------------------------------------------------------
> > Bill Colwell
> > C. S. Draper Lab
> > Cambridge, Ma.
> > bcolwell AT draper DOT com
> > -----------------------------------------------------------