Re: [Networker] How to save multiple paths?
2006-01-20 14:52:28
Please see my replies inserted below.
Darren Dunham wrote:
What I wanted is multiple save sets, so each path or directory has its
own unique name. Obviously, one save command will create a single ssid,
so I will need to write something to launch multiple saves, one for each
path, but not to exceed the desired parallelism for the client.
Can I ask why? I'm sure there are some reasons, but this seems rather
esoteric. I'd like to make sure we're not overlooking some other way of
handling this.
Sure. Allow me to explain. I have gzipped database files like the
following:
/raid/home/user/Backups/dbname_01.08.06.gz
/raid/home/user/Backups/dbname_01.13.06.gz
wherein each is named according to the date (mm.dd.yy) that it was
created. There are several created each month. The nightly incrementals
grab this data, along with other stuff under the 'Backups' directory,
but this pool is subject to recycling. This pool also does fulls. Now,
this pool is fine for short-term recoveries, but once you get past the
browse time, it becomes a guessing game as to which instance of the
/raid save set contains the desired file, and /raid is huge, so even if
you knew, reading through all of /raid would take a long time. Of
course, I could create a separate save set that just listed
/raid/home/user/Backups, but again, I want to be more specific so that
later save set recoveries can better pinpoint the file.
To get around this, I have another pool that is not subject to
recycling, and I use it just for the gzipped files. In other words, no
volumes in that pool are ever recycled. So, I later come along and back
up the same gzipped files to this pool, but I'm tired of having to list
all the sundry names in the save set listing for the given client nsr
resource in the GUI. Up until now, I've been manually listing all of
them, and then, since the client's parallelism is set to 4, it sends 4
separate streams to tape at a time. Since each was listed separately in
the save set list, I see each as a separate entry in save set recover,
which is what I want -- all fine and well. This makes it easy to later
find the one I need, since they're not all lumped under one common save
set name. What I would prefer, however, is not to have to edit the save
set list every time I want to run a new batch.
Instead, I'd like a script, run on the client, that would figure out
which files have not yet been backed up and then back them up, taking
advantage of the client's parallelism so it's not just running one save
at a time as it would if they were all listed in an input file. In this
case it would run 4 at a time, assuming there were at least 4, and as
soon as one finished it would launch the next, always keeping 4 running
until it's done.
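A minimal sketch of such a launcher, assuming NetWorker's "save" command
is on the PATH and invoked as "save -b <pool> <path>". The pool name,
the run_parallel_saves helper, the .saved_list tracking file, and the
SAVE_CMD override are all my inventions for illustration, not NetWorker
features:

```shell
#!/bin/bash
# Sketch only: launch one "save" per gzip file so each file gets its
# own save set (its own ssid), with at most N saves running at once to
# match the client parallelism. Assumes bash 4.3+ for "wait -n".

SAVE_CMD=${SAVE_CMD:-save}   # NetWorker save binary; override for dry runs

run_parallel_saves() {
  local dir=$1 pool=$2 max=${3:-4}
  local done_list="$dir/.saved_list" f
  touch "$done_list"
  for f in "$dir"/*.gz; do
    [ -e "$f" ] || continue                   # no .gz files at all
    grep -qxF "$f" "$done_list" && continue   # already backed up; skip
    # Block until a slot frees up, so at most $max saves run at once.
    while [ "$(jobs -rp | wc -l)" -ge "$max" ]; do
      wait -n
    done
    # Record the file in the done list only if the save succeeds.
    ( "$SAVE_CMD" -b "$pool" "$f" && echo "$f" >> "$done_list" ) &
  done
  wait   # let the remaining saves finish
}
```

Invoked as, say, run_parallel_saves /raid/home/user/Backups GzipPool 4.
Because each file is a separate save invocation, each gets its own ssid
and shows up individually in save set recover.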
I have indexing turned off on the pool, so there's no need to worry
about the client index becoming too large, and these are run on an ad
hoc basis.
I doubt it'll be too large if you're not doing them often, but indexing
on the pool only affects whether the indexes will be sent to tape, not
whether they are stored. You'd need to turn the browse period down to
reduce the impact of index storage on the disk.
Actually, according to the GUI help, if 'Store File Index Entries' is
set to 'Yes' then the index entries from the backup will be included in
the online index. I have this value set to 'No' for the pool. When I
browse, I don't see them listed, so the index on disk is not getting
updated, and I assume this is why. But, I don't need to be able to
browse these because save set recover lists them by name, and/or I can
use mminfo to get their names. This would not be possible if they were
just getting all lumped under one common name like /raid, which they are
for the recycle pool. The browse time for the client is one month, and
the other data that is not going to this pool is browseable, and that
pool has this field set to 'Yes'. Are you sure you're not referring to
the 'No Index save' option on the group? The GUI help states that this
controls whether or not the index is written to tape.
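For reference, the kind of mminfo query I mean is along these lines.
This needs a live NetWorker server, so treat it as a sketch; the pool
and client names are placeholders:

```shell
# List the individually named save sets in the non-recycling pool,
# one row per save set, so the exact .gz file can be picked out by
# name for a save set recover. "myclient" and "GzipPool" are
# placeholder names, not values from this setup.
mminfo -q "client=myclient,pool=GzipPool" -r "name,savetime,ssid,volume"
```

Since each gzip file was saved individually, the "name" column shows the
full path of each file, which is what makes pinpointing a given date's
dump straightforward.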
To sign off this list, send email to listserv AT listserv.temple DOT edu and type
"signoff networker" in the
body of the email. Please write to networker-request AT listserv.temple DOT edu
if you have any problems
with this list. You can access the archives at
http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER