I have not forgotten that simultaneous clones/stages and recoveries are not
possible today from
the same AFTD.
However, I do not see why task 1 issuing a clone for saveset A from AFTD0 to
POOL0, task 2 issuing a clone for saveset B from AFTD1 to POOL0, and task 3
issuing a stage from AFTD1 to POOL1 must wind up with task 1 running, task 2
holding AFTD1 while waiting for a tape in POOL0 to become available, and
task 3 holding a tape in POOL1 while waiting for AFTD1 to become available.
Nope, it doesn't have to be that way. If each of those tasks acquired its
source (AFTD0/1) first and THEN acquired its target (a tape in POOL0/1),
tasks 1 and 2 would run, then task 3 would run. No problem. As it stands,
task 1 runs while tasks 2 and 3 deadlock each other until task 1 completes;
then tasks 2 and 3 fight it out over who gets to read from AFTD1 and run in
succession. While task 1 is running, both AFTDs and both tape drives are
held hostage.
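The ordering argument above is the classic deadlock-avoidance rule: if every
task acquires resources in the same global order (source device before target
tape), no circular wait can form. A minimal sketch, using Python threads and
plain locks to stand in for the devices (the names AFTD0/1 and POOL0/1 are
illustrative; this is not NetWorker code):

```python
import threading

# Devices and pools modeled as plain locks (names are illustrative).
devices = {name: threading.Lock() for name in
           ("AFTD0", "AFTD1", "POOL0", "POOL1")}

completed = []

def task(name, source, target):
    # Acquire the source first, THEN the target -- the ordering argued
    # for above. Every task follows the same order, so no task ever
    # holds a target tape while waiting on a source device.
    with devices[source]:
        with devices[target]:
            completed.append(name)  # simulate the copy while holding both

tasks = [
    threading.Thread(target=task, args=("task1", "AFTD0", "POOL0")),
    threading.Thread(target=task, args=("task2", "AFTD1", "POOL0")),
    threading.Thread(target=task, args=("task3", "AFTD1", "POOL1")),
]
for t in tasks:
    t.start()
for t in tasks:
    t.join()
# All three tasks complete: tasks 2 and 3 serialize on AFTD1, but
# neither one blocks the other indefinitely.
```

With the reversed (target-first) order, task 3 could grab a POOL1 tape and
then sit on AFTD1 exactly as described above; source-first makes that wait
chain impossible.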
Frank
On 2/28/12 1:13 PM, bingo wrote:
> As long as 2 different initiators try to control the environment at the same
> time using different destination pools this is the obvious result. My
> suggestion: Let the DB admins run their backups but take control over cloning
> and staging to avoid such problems in the future.
>
> I would also use the argument that you would then be able to serve a
> recover request faster. Do not forget that simultaneous clones/stages and
> recoveries are not possible today.
>
--
Frank Swasey | http://www.uvm.edu/~fcs
Sr Systems Administrator | Always remember: You are UNIQUE,
University of Vermont | just like everyone else.
"I am not young enough to know everything." - Oscar Wilde (1854-1900)