I wasn't clear: by pathname, I meant a directory. I thought all our data
was multiplexed, since we have the target sessions set to 5 for all
drives. With directories, I've seen the recover come back and pull more
files from later in the tape, which is why I let it run to the end.
Darren Dunham wrote:
> > Curious here.
> > Let's suppose I have a save set instance that's 10 GB. If I recover the
> > whole thing versus, say, a specific pathname, will they both take the
> > same time, assuming I let the recover in both cases run all the way
> > through?
> By pathname, I assume you mean a specific file? The file will take much
> less time.
> > Originally, I was thinking it would since it still has to read all the
> > data in the save set. Of course, if you're recovering a specific path,
> > you could cancel the recovery once you're sure you have everything, and
> > that would obviously save time, but unless you were sure
> Why wouldn't you be sure?
> > , you'd wanna
> > ride it all the way out, so seems in that case since it has to read
> > through the whole thing anyway, it would take the same amount of time?
> > Also, if you did provide a pathname, it can't know that it's reached the
> > desired path unless it de-multiplexes the streams anyway so, again,
> > doesn't sound like much difference in time?
> Nope, the information available is considerably more granular than that.
> First, you can examine the file indexes to see that it knows where
> within the saveset the file is. Try something like this..
> # nsrinfo -V -N /etc/passwd <client>
> scanning client `<client>' for all savetimes from the backup namespace
> /etc/passwd, size=912, off=1167092432, app=backup(1), date=1075927189
> Wed Feb 4 12:39:49 2004
> [blah blah blah..]
> It's a 912 byte file beginning at 1167092432 into the backup.
> Then we just need to find the fragment on tape. It's easy enough to
> find the saveset..
> # mminfo -av -q 'savetime=1075927189' -r 'volume,ssid,totalsize'
>  volume        ssid      total
> DLT.001  559752705  3897729916
> But we can also see that there are several "fragments" of this on the tape:
> $ mminfo -av -q 'ssid=559752705' -r
>  volume       first        last     size    total  file  rec
> DLT.001           0   625372715   610 MB  3806 MB     2    0
> DLT.001   625372716  1250906195  1221 MB  3806 MB     3    0
> DLT.001  1250906196  1876280955  1832 MB  3806 MB     4    0
> DLT.001  1876280956  2501890863  2443 MB  3806 MB     5    0
> DLT.001  2501890864  3127483339  3054 MB  3806 MB     6    0
> DLT.001  3127483340  3753229087  3665 MB  3806 MB     7    0
> DLT.001  3753229088  3897729915  3806 MB  3806 MB     8    0
> So in this case, the saveset is broken into chunks of about 600M, each
> in a different file on the tape. Since it's not multiplexing, the
> layout is very consistent, with each fragment beginning at the start of
> a file (each begins at record 0). This wouldn't be the case in a
> multiplex tape. Byte 1167092432 is found in the second fragment (file
> number 3). Also, since the record size on the tape is (hopefully)
> known, it can then just issue an fsr (forward space record) to get to the
> right spot in the file.
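To make that lookup concrete, here's a sketch in awk: given the fragment
table (first/last saveset offsets and the media file number), find which
tape file holds a given byte offset. The column layout is assumed from the
mminfo output above; I've abbreviated the size columns.

```shell
# Which media file contains saveset byte 1167092432?
# Fields: $1 volume, $2 first, $3 last, $6 media file number.
# Fragment rows abbreviated from the mminfo output above.
awk -v off=1167092432 '
  NR > 1 && off >= $2 && off <= $3 { print "media file " $6; exit }
' <<'EOF'
volume first last size total file rec
DLT.001 0 625372715 610MB 3806MB 2 0
DLT.001 625372716 1250906195 1221MB 3806MB 3 0
DLT.001 1250906196 1876280955 1832MB 3806MB 4 0
EOF
```

That prints "media file 3", matching the fragment Darren identifies below.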
> (Aside: If the st.conf on a Solaris box is screwed up, this is where
> the process can break down. It attempts to do the fsr, but because the
> record size is wrong, the verification fails. It then has to back up
> and read through the entire tape file rather than scan to the correct
> spot.)
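And once the right media file is known, the positioning step works out
roughly like this. The record size and device path here are assumptions,
not taken from any real setup, and the mt commands are echoed rather than
run since they need an actual tape drive:

```shell
# Sketch: how many whole tape records to forward-space once positioned
# at the start of media file 3. Record size and device path are assumed.
RECSIZE=65536          # tape record size; must match what was written
FRAG_START=625372716   # saveset offset where media file 3 begins (mminfo)
TARGET=1167092432      # saveset offset of /etc/passwd (nsrinfo)
SKIP=$(( (TARGET - FRAG_START) / RECSIZE ))
echo "mt -f /dev/rmt/0n fsf 3     # space forward to media file 3"
echo "mt -f /dev/rmt/0n fsr $SKIP  # space $SKIP records within the file"
```

If RECSIZE doesn't match the size the records were actually written with,
the fsr count lands in the wrong place, which is exactly the st.conf
failure mode described in the aside above.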
> Darren Dunham ddunham AT taos DOT
> Senior Technical Consultant TAOS http://www.taos.com/
> Got some Dr Pepper? San Francisco, CA bay area
> < This line left intentionally blank to confuse you. >
> Note: To sign off this list, send a "signoff networker" command via email
> to listserv AT listmail.temple DOT edu or visit the list's Web site at
> http://listmail.temple.edu/archives/networker.html where you can
> also view and post messages to the list. Questions regarding this list
> should be sent to stan AT temple DOT edu