Networker

Re: [Networker] Large directories slow access using save -I input_filename

2003-07-24 21:27:06
Subject: Re: [Networker] Large directories slow access using save -I input_filename
From: Brendan Sandes <Brendan.Sandes AT DSE.VIC.GOV DOT AU>
To: NETWORKER AT LISTMAIL.TEMPLE DOT EDU
Date: Fri, 25 Jul 2003 11:26:50 +1000
Hi Lubos.

I noticed that on the NT client I could specify save set entries in the
GUI as:
<path>/a*
<path>/b*
etc.

It came up with some errors:

  <server>:e:\lotus\domino\data\mail\a*: No full backups of this save set
  were found in the media database; performing a full backup
* <server>:e:\lotus\domino\data\mail\a* stdin directives line 0: parse error
  <server>: e:\lotus\domino\data\mail\a* level=full,  16 GB 02:26:15 161 files

(I have replaced the name of the server with <server>.)  However, I could
still see the files and restore them using the GUI, so it obviously worked.
Given that our backups finish within the window anyway, I decided to go
back to "All" in the save set screen just in case.  This took our backup
time down from 11 hours to about 5:30.  Until I figure out how to get rid
of the errors, I probably won't use this method.

It seemed to work for both files and directories.

(Note that you then also have to configure another client instance that
backs up everything except this directory, using a directive; use null,
not skip.)
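
A rough sketch of such a directive (only lightly checked; I've reused the
mail path from the errors above, and I'm assuming the "+" prefix is what
makes the ASM apply recursively):

e.g.
<< "e:\lotus\domino\data\mail" >>
+null: *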

Given that you are using Unix, though, you could do a similar thing with
find, splitting the backup into multiple save streams by forking them off.
(I haven't checked that this works; some syntax may be wrong, but you'll
get the idea anyway.)

e.g.
#!/bin/sh
# Kick off one save per first-letter bucket, keeping at most 5
# save processes running at once.
for XX in a b c d e f g
do
    # Give each bucket its own list file so a running save never has
    # its input file overwritten by the next iteration.
    find <Directory name> -name ${XX}\* -print > /tmp/save-${XX}.txt
    save -s lgt-blan -q -I /tmp/save-${XX}.txt -b"BGW" &
    # Throttle: wait while more than 5 saves are still running.
    while [ `ps -ef | grep save | grep -v grep | wc -l` -gt 5 ]
    do
        sleep 360
    done
done

The while loop above is a check so you don't run too many saves at once;
how many you can run in parallel is something you will have to determine.
You could then configure the client on the NetWorker server to back up
everything except that directory.

Hope some of this helps.

Cheers!
Brendan



lubos.bohm AT OSKARMOBIL DOT CZ
Sent by: NETWORKER AT LISTMAIL.TEMPLE DOT EDU
24/07/2003 10:48 PM
Please respond to NETWORKER AT LISTMAIL.TEMPLE DOT EDU and to
lubos.bohm AT OSKARMOBIL DOT CZ

To: NETWORKER AT LISTMAIL.TEMPLE DOT EDU
Subject: [Networker] Large directories slow access using save -I input_filename

Hi All,


Trying to back up approx. 200,000 files listed in "input_filename" from a
large directory containing approx. 960,000 files takes, IMHO, too long: it
takes hours before the tape is even loaded and the first bytes are written.

We use:
1/ save -s lgt-blan -q -I /var/opt/Storage/Apr.in -b"BGW"
   on Solaris 8 with VxFS
2/ Networker 6.1.1 build 238 on both sides

I suppose the problem is inside "save" itself: it cannot search through
the large directory efficiently.

Folks, what are your experiences? Could it be a bug fixed in a newer
version? Or should I rather pre-process these files, e.g. by moving them
into a directory tree (as Squid does, for example)? Or some advanced
VxFS tuning?
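
A rough, untested sketch of that Squid-style pre-processing (the
/data/mail path here is made up for illustration):

e.g.
#!/bin/sh
# Fan a flat directory out into one subdirectory per leading
# character of the file name, Squid-style.
cd /data/mail || exit 1
for f in *
do
    [ -f "$f" ] || continue          # leave subdirectories alone
    d=`echo "$f" | cut -c1`          # first character of the name
    mkdir -p "$d"
    mv "./$f" "$d/"
done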

Thanks a lot, Lubos

--
Note: To sign off this list, send a "signoff networker" command via email
to listserv AT listmail.temple DOT edu or visit the list's Web site at
http://listmail.temple.edu/archives/networker.html where you can
also view and post messages to the list.
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
