Subject: [Networker] Device or resource busy during cloning?
From: George Sinclair <George.Sinclair AT NOAA DOT GOV>
To: NETWORKER AT LISTMAIL.TEMPLE DOT EDU
Date: Fri, 31 Oct 2003 15:48:10 -0500

I'm seeing a strange problem when cloning and am not sure what to make of it. I haven't seen this before.

I have two clone pool volumes, ARC_c001 and ARC_c002, both with plenty of space. I try to clone 4 savesets: 2 are on volume ARC001, and the other 2 are client indexes located on volume ARC002.

ARC001 and both clone tapes are in the storage node's library; ARC002 is in the primary server's library.

When I run the clone command as:

nsrclone -s server -b 'ARC Clone' -S ssid1 ssid2 ssid3 ssid4
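In case it helps to reproduce, the ssids can be pulled from mminfo and fed to nsrclone in one run. This is just a minimal sketch, assuming the server, pool, and volume names above; the "name=index:*" filter for the client-index savesets is an assumption about how they are named in the media database and may need adjusting:

```shell
#!/bin/sh
# Sketch: gather the ssids for the four savesets and clone them in one pass.
# Volume and pool names are the ones from this message.

# ssids of the two savesets on ARC001
SSIDS=`mminfo -s server -q "volume=ARC001" -r ssid`

# ssids of the two client-index savesets on ARC002
# (the name=index:* match is an assumption about the index saveset names)
SSIDS="$SSIDS `mminfo -s server -q "volume=ARC002,name=index:*" -r ssid`"

# clone everything to the 'ARC Clone' pool in a single nsrclone run
nsrclone -s server -b 'ARC Clone' -S $SSIDS
```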

NetWorker clones the first two savesets to ARC_c001 (just what you'd expect), but when it starts to clone the two indexes it issues a "Device or resource busy" message for the drive holding volume ARC_c001. It then loads ARC_c002 into another drive and clones the indexes to that tape, not ARC_c001! If I run the command again with different ssids, the same thing happens, except this time it writes the first two savesets to ARC_c002, issues the "Device or resource busy" message on the device containing ARC_c002, and then clones the indexes to ARC_c001. Subsequent tests keep producing the same swapping results. I don't understand why NetWorker has to clone the last two savesets to another volume in the pool when there's plenty of space on the first one, and it started on the first one. It makes no sense, and as I said, the results flip back and forth with every run. I even tried cloning just two savesets, one on one tape (ARC001) and one index on another (ARC002), with the same results. This does not occur when both ssids are on the same tape, however.

I've cloned lots of stuff before to other clone pools (not ARC Clone), and I've never seen this behavior. My experience has always been that NetWorker didn't care how many ssids were spread across how many different tapes: it would clone all of them to the same clone volume and would not move to a different one until it ran out of space on the first, then continue on the next.

Has anyone seen this? I tried re-labeling the tapes; it still happens. We're running 6.1.1. The storage node runs Red Hat Linux and the primary server runs Solaris 2.8, also 6.1.1. If I delete one of the clone volumes, so I only have one, and try again, it just sits and waits for a second volume after cloning the first ssid(s). Not good.

Any help appreciated.

George

--
Note: To sign off this list, send a "signoff networker" command via email
to listserv AT listmail.temple DOT edu or visit the list's Web site at
http://listmail.temple.edu/archives/networker.html where you can
also view and post messages to the list.
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
