Presuming we're all heading in this direction (i.e., backup to disk, stage
to tape)... can you explain a little bit about staging for those of us
who aren't doing it yet?
Namely, are you automatically staging older data to tape, or data of a
certain level (full backups, etc.)?
The interference with cloning sounds a bit troubling. Are you doing your
cloning via scripts? I guess I can see a conflict if nsrclone asked for an
ssid but it had already been nsrstaged to tape when nsrclone thought it was
still on disk.
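For anyone who wants to try the ONE-by-ONE approach described below, here's a
rough sketch of that kind of loop in plain sh. It assumes NetWorker's mminfo
and nsrstage commands; the flags and volume name shown are illustrative, not
verified against any particular release, so check your man pages first:

```shell
#!/bin/sh
# Sketch of a one-save-set-at-a-time staging loop.
# Commands/flags (mminfo -q/-r/-x, nsrstage -m -S) are assumptions
# from memory of the NetWorker CLI -- verify against your release.

# Run one staging command per ssid/cloneid read from stdin, stopping
# on the first failure so the file device isn't left half-migrated.
stage_each() {
    while read id; do
        # nsrstage -m migrates one save set and frees its space on
        # the file device before the next one starts.
        "$@" "$id" || return 1
    done
}

# Example wiring (hypothetical volume name "stage.001"):
#   mminfo -q 'volume=stage.001' -r 'ssid,cloneid' -xc/ |
#       stage_each nsrstage -m -S
```

Because the space is recovered after each save set rather than after the
whole batch, backups and clones only ever wait for one nsrstage to finish.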
Robert Maiello
Thomson Medical Economics
On Wed, 21 May 2003 12:10:10 -0700, Thomas, Calvin
<calvin.thomas AT NACALOGISTICS DOT COM> wrote:
>My reasons for posting this are to stimulate some comments from others
>about NW7.0 and its new features. If you aren't interested in NW7, stop
>here.
>
>I adopted NW7 for Tru64 last month, and after the initial glow has worn off,
>here is what I find.
>
>First a few stats:
>I use Adv_File staging on the server. My current staging drive is 300GB.
>My jukebox is 140GB and it only copies at 1MB/sec. My daily backups are
>about 50GB.
>
>One problem I had with staging previously is fixed: if the system is
>staging to my jukebox, my backups still run successfully. Big improvement
>here.
>
>Now to the down side.
>
>1. NW7 can now stage and back up at the same time, but it cannot stage and
>clone at the same time. When my backups finish, the automatic cloning waits
>for the staging to finish. Since staging may take up to a day on my jukebox,
>the clone doesn't start until the staging is done. When the clone isn't
>done in a timely manner, it SEVERELY limits the usefulness of the backup
>+ staging improvement.
>
>2. Automatic staging creates a list of files to stage, and then stages them
>ALL at once. After staging ALL the files to tape, it then deletes ALL the
>files at the same time. Sounds logical, right? True, but not very
>functional. I was using a script previously (on 6.1.1) that staged files
>ONE by ONE to tape, and then deleted the files ONE by ONE from the file
>device. This was much superior to the ALL-at-once way. What I find is if
>my jukebox is full (like over a long weekend) and the file device gets
>full, when I put in new tapes after the weekend, the staging will stage ALL
>the files at once. In my case that can be 200GB, and it takes 36 hours to
>stage all the files. This causes the automatic cloning to fail since the
>next backup started before the file device was freed up. It also caused
>concern on my part because I only had 20GB of space left for the nightly
>50GB backup. Even though 175GB had been staged to tape, I could not free up
>the space for the next nightly backup. I was forced to kill the staging
>process, and manually start the "recover space" process to get enough room
>to do the nightly backup.
>
>The Wish List:
>It should be a relatively simple programming change for Legato to change the
>staging process from an ALL (save sets) at once process to a ONE (save set)
>at a time process. I was able to accomplish this feat with a simple script
>on NW6.1.1.
>
>My first script ran just like NW7.0's staging does. It created a giant list
>of SSID/CLONEIDs and passed it to a single nsrstage command, with the same
>problems I have with NW7.0 now.
>
>My second script was much better. It simply used AWK to run the nsrstage
>command once with a single ssid/cloneid number as input. Once the nsrstage
>command finished the single save set, it deleted that save set's file from
>the file device, and then AWK would run the next nsrstage command. This
>allowed a gradual reduction in the size of the files stored on the file
>devices, and it allowed backups and clones to run (though they had to wait
>until the current nsrstage command finished). It was kludgy and
>temperamental, but it worked.
>
>
>What do you think? Comments everyone?
>
>--
>Note: To sign off this list, send a "signoff networker" command via email
>to listserv AT listmail.temple DOT edu or visit the list's Web site at
>http://listmail.temple.edu/archives/networker.html where you can
>also view and post messages to the list.
>=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=