Subject: [Veritas-bu] question for you re: one scalar10k library with 20 total drives , one robot, 10-drives = LTO1, 10-drives = LTO2
From: larry.kingery AT veritas DOT com (Larry Kingery)
Date: Sun, 21 Sep 2003 15:15:16 -0400 (EDT)
> NBU 3.4.1, HP-UX 11i masters in a 3-way Veritas ClusterManager
> cluster; the master is also currently the media server, though that
> may soon change.  The library is an ADIC Scalar 10K which currently
> has 10 IBM Magstar LTO(1) drives, and we are adding 10 more drives,
> but these are LTO-2.  Because of the forward incompatibility (an
> LTO2 tape loaded into an LTO(1) drive will fail and leave a snarl
> of frozen tapes), it looks like we need to set up another storage
> unit for the new drives, as well as another volume pool for the
> LTO2 tapes.  My question is: can anyone describe the "locking"
> mechanism in NBU by which simultaneous requests to the same robot,
> which serves two different storage units, are managed?  Is this a
> "gotcha"?  Is this a "don't do that"?  Will we just get a bunch of
> status 219 (storage unit unavailable) errors, or is this supported?
> Has anyone else attempted to do this?  Thanks all!

No problem here.  This is exactly why you have multiple media types
such as hcart, hcart2, and hcart3.  Pick a new type for the new drives
and configure the new tapes, drives, and storage units with that type.
NBU will NOT mix tape and drive types; for example, it will not put an
hcart tape in an hcart2 drive.
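
For example, something along these lines would do it.  This is just a
sketch from memory, not tested: check the bpstuadd, vmadd and tpconfig
man pages for your release, and note that the storage unit label,
hostnames, robot number, pool number, media ID and drive path below
are all made up:

    # new storage unit tied to the hcart2 density
    bpstuadd -label LTO2-stu -host mediaserver1 -density hcart2 \
        -rt TLD -rn 0

    # add an LTO2 tape as hcart2 media into the new pool (pool 2 here)
    vmadd -m L20001 -mt hcart2 -p 2 -rt tld -rn 0 -rh robothost1 \
        -d "LTO2 media"

    # add each new drive as hcart2 so an hcart tape never gets
    # mounted in it
    tpconfig -add -drive -type hcart2 -path /dev/rmt/10cbn \
        -robot 0 -robtype tld -robdrnum 11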

The only downside is that if all your LTO1 drives are busy, NBU can't
use an idle LTO2 drive to read a tape written as LTO1, even though the
drive itself should be able to read it (I'm assuming LTO2 can read
LTO1 media; if not, substitute DLT7000 and DLT8000 in this example).


-- 
Larry Kingery 
         Error: Keyboard not attached. Press F1 to continue.
