Veritas-bu

[Veritas-bu] tape usage

Subject: [Veritas-bu] tape usage
From: Sen" <discussion.groups AT gmx DOT net (Sen)
Date: Tue, 7 Oct 2003 00:02:38 +0800
It's a P1000 library.
How can I restrict the backups to fill both tapes completely before loading
other free tapes?
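
(A rough sketch of how the keywords Mark mentions below would typically be set,
by adding these lines to /usr/openv/netbackup/bp.conf on the master server; the
exact spelling is a guess, and in some NetBackup releases the first entry
appears as ALLOW_MULTIPLE_RETENTIONS_PER_MEDIA:

    ALLOW_MULTIPLE_RETENTIONS_PER_MEDIA
    NO_STANDALONE_UNLOAD

Afterwards, something like /usr/openv/netbackup/bin/goodies/available_media or
"bpmedialist -summary" should show which tapes are FULL and which are still
ACTIVE with free space, to confirm that both tapes are being filled before new
ones get pulled.)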

----- Original Message ----- 
From: "Donaldson, Mark" <Mark.Donaldson AT experianems DOT com>
To: "'Sen'" <discussion.groups AT gmx DOT net>; "Pearce, Andrew W."
<apearce AT kforce DOT com>; "'Steve Dvorak'" <sdvorak AT veritas DOT com>; 
"netbackup-l"
<netbackup-l AT yahoogroups DOT com>; "nbu-lserv" <nbu-lserv AT dsihost-srv01 
DOT com>;
<veritas-bu AT mailman.eng.auburn DOT edu>
Sent: Monday, October 06, 2003 11:51 PM
Subject: RE: [Veritas-bu] tape usage


> So this is a standalone or stacker drive unit?
>
> If it's ejecting the tape before it's full and loading the next in line, you
> either have to keep everything at the same retention level or use the
> ALLOW_MULTIPLE_RETENTIONS keyword.  You can also use the NO_STANDALONE_UNLOAD
> keyword to keep the tape from being ejected before it's full.
>
> -M
>
> -----Original Message-----
> From: Sen [mailto:discussion.groups AT gmx DOT net]
> Sent: Monday, October 06, 2003 7:39 AM
> To: Pearce, Andrew W.; 'Steve Dvorak'; netbackup-l; nbu-lserv;
> veritas-bu AT mailman.eng.auburn DOT edu
> Subject: Re: [Veritas-bu] tape usage
>
>
> I have 2 drives in the loader.
> 1 job with 2 streams, multiplexing set to 1.
> Any help?
>
> ----- Original Message ----- 
> From: "Pearce, Andrew W." <apearce AT kforce DOT com>
> To: "'Steve Dvorak'" <sdvorak AT veritas DOT com>; "'Sen'"
> <discussion.groups AT gmx DOT net>; "netbackup-l" <netbackup-l AT yahoogroups 
> DOT com>;
> "nbu-lserv" <nbu-lserv AT dsihost-srv01 DOT com>;
> <veritas-bu AT mailman.eng.auburn DOT edu>
> Sent: Monday, October 06, 2003 7:42 PM
> Subject: RE: [Veritas-bu] tape usage
>
>
> > How many tape drives do you have in your loader?
> >
> > If you have 4 drives and 4 or more jobs scheduled to fire at once (without
> > multiplexing), you will pull 4 tapes, one for each available drive.  If this
> > is the case, you could try (in order of personal preference):
> > 1. Grouping the jobs into 1 or 2 policies and changing "Limit Jobs Per
> > Policy" to 1 or 2 accordingly.
> > 2. Staggering your schedule.
> > 3. Multiplexing, if it is justified.
> > 4. Only allowing 2 drives at a time for backups.
> >
> > Andrew
> > -----Original Message-----
> > From: Steve Dvorak [mailto:sdvorak AT veritas DOT com]
> > Sent: Monday, October 06, 2003 12:07 AM
> > To: 'Sen'; netbackup-l; nbu-lserv; veritas-bu AT mailman.eng.auburn DOT edu
> >
> > Check out the "Allow multiple retentions" capability.
> >
> > Steve Dvorak
> >
> > -----Original Message-----
> > From: Sen [mailto:discussion.groups AT gmx DOT net]
> > Sent: Saturday, October 04, 2003 7:46 PM
> > To: netbackup-l; nbu-lserv; veritas-bu AT mailman.eng.auburn DOT edu
> > Subject: [Veritas-bu] tape usage
> >
> >
> > Hi,
> > I have a pool of 10 tapes which is configured for full backups.  This backup
> > only requires 2 tapes; however, if there are 10 tapes in the library, it will
> > use about 4 tapes.
> >
> > How can I force NB 3.4 to continue appending data to those tapes which still
> > have free space before moving on to other new tapes?  Thanks
> >
> >
> > _______________________________________________
> > Veritas-bu maillist  -  Veritas-bu AT mailman.eng.auburn DOT edu
> > http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
> >
> >
>
>
> _______________________________________________
> Veritas-bu maillist  -  Veritas-bu AT mailman.eng.auburn DOT edu
> http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
>
>


