Networker

Re: [Networker] Cloning a set of volumes?

2007-10-03 02:13:45
Subject: Re: [Networker] Cloning a set of volumes?
From: Yaron Zabary <yaron AT ARISTO.TAU.AC DOT IL>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Wed, 3 Oct 2007 08:08:21 +0200
George Sinclair wrote:
I would like to clone a number of tapes, all labeled into the same pool. I want to clone all the save sets on the tapes. I have 4 devices, so I would like to be able to have two simultaneous nsrclone operations running so as to complete the work faster. The clone pool will be the same for all clone volumes. I was thinking to use the nsrclone command and have it clone by volume like:

'nsrclone -s server -b clonepool -f path_to_file'

where file contains the first half of the volume names, and then launch another nsrclone command, and have that work through the last half of the volumes. The problem is how do I avoid having one nsrclone command try to continue a save set that spans to a second tape that the second nsrclone command is already reading from? If the second nsrclone command has already finished that volume, no worries, but I'd just like to keep one process far enough ahead so as to minimize contention, timeouts, hangs, etc.

Is there a reasonably straightforward way to generate the two separate volume lists such that the order of the volumes listed will minimize the likelihood that one nsrclone process would request a tape being read by the other?

I'm not opposed to cloning by save set id wherein the file contains SSIDs and not volume names. This is how I typically manually clone various save sets, but in this case since I need everything on the volumes, doing it by volume name just seemed easier, but it might create a problem if I have to cancel the operation at some point because I need the drives for backups, and maybe the clone operation didn't finish as soon as I expected. I would not be able to rerun it since I wouldn't want to re-clone the save sets that had already been cloned, so maybe doing it by SSID would be preferred?

I don't really see how you can do that by volume, as the volume itself does not provide any information about save sets that might span several volumes. You could use a command like the one below to identify which save sets do not span multiple volumes and treat them differently:

# ( mminfo -q volume=000001 -r ssid ; mminfo -q volume=000002 -r ssid ) | sort| uniq -c
   1 1040364433
   1 1761777950
   1 2030209979
   1 2046986377
   1 2063763574
   1 2080540491
   1 2097317694
   1 2114094910
   1 2130872126
   1 2147649342
   1 2164425239
   1 2181202443
   1 2197979655
   1 2214752016
   1 2231529200
   1 2248306416
   1 2281820926
   1 2298598136
   1 2315375346
   1 2768328614
   1 3288418421
   2 3758176023
   1 3791730130
   2 3825284549

In this example, ssids 3758176023 and 3825284549 appear on both 000001 and 000002, so they might cause the problem you fear. In your case, I suggest you create three files: two with the ssids that are safe to clone in parallel (grouped by volume) and one with the ssids that span multiple volumes. After the two parallel nsrclone processes finish, you can run the third file as well.
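A minimal sketch of that split, using comm(1) on sorted per-volume ssid lists. The ssid values here are made-up samples standing in for the output of the mminfo queries above (in practice you would redirect "mminfo -q volume=... -r ssid | sort" into each file); the group1/group2/spanning file names are just illustrative:

```shell
#!/bin/sh
# Sample per-volume ssid lists (in real use, populate these with e.g.
#   mminfo -q volume=000001 -r ssid | sort > vol1.ssids ).
printf '%s\n' 100 200 300 | sort > vol1.ssids
printf '%s\n' 300 400 500 | sort > vol2.ssids

comm -23 vol1.ssids vol2.ssids > group1.ssids    # ssids only on volume 1
comm -13 vol1.ssids vol2.ssids > group2.ssids    # ssids only on volume 2
comm -12 vol1.ssids vol2.ssids > spanning.ssids  # ssids on both volumes

# Then run the two single-volume groups in parallel, and the spanning
# ssids afterwards, e.g. (hedged -- check your nsrclone man page):
#   nsrclone -s server -b clonepool -S -f group1.ssids &
#   nsrclone -s server -b clonepool -S -f group2.ssids &
#   wait
#   nsrclone -s server -b clonepool -S -f spanning.ssids
```

With more than two volumes the same idea applies: any ssid whose count from the "sort | uniq -c" pipeline is greater than 1 goes into the spanning file.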


We're using NW 7.2.2 on Solaris with two Linux snodes.

George



--

-- Yaron.

To sign off this list, send email to listserv AT listserv.temple DOT edu and type 
"signoff networker" in the body of the email. Please write to networker-request 
AT listserv.temple DOT edu if you have any problems with this list. You can access the 
archives at http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER
