I got around this by automatically breaking the filesystem into portions. I
still had the possibility of failing if any single directory contained more
than 1 TB, but that never occurred. v5.x, of course, has removed this
limitation.
I used /<bigfs>/* for my file list and enabled multiple data streams. The
/<bigfs> filesystem was full of directories that were all under 1 TB.
Otherwise, I think you're stuck manually specifying streams, and that, of
course, is a maintenance problem.
Since you've got numbered directories, you might get away with something
like:
NEW_STREAM
/<bigfs>/[12]*
NEW_STREAM
/<bigfs>/[34]*
...etc...
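If you'd rather not hand-maintain that list as new numbered directories
appear, something along these lines could regenerate it for you. This is an
untested sketch, not anything built into NetBackup: /<bigfs> and the 1 TB
cap are placeholders, and the greedy packing is just one way to group
directories into streams.

#!/usr/bin/env python
# Sketch: regenerate a NetBackup backup selections list so that each
# NEW_STREAM group stays under a size cap. Placeholders throughout --
# adjust MOUNT and CAP for your environment.
import os

MOUNT = "/<bigfs>"      # placeholder for the large filesystem
CAP = 1024 ** 4         # 1 TB per-stream ceiling, in bytes

def dir_size(path):
    # Total size of all regular files under path (roughly du -sb).
    total = 0
    for root, dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.lstat(os.path.join(root, name)).st_size
            except OSError:
                pass    # file vanished mid-scan; skip it
    return total

# Greedily pack top-level directories into streams under the cap.
# Note: a single directory over the cap still gets its own stream,
# which mirrors the failure case described above.
streams, current, current_size = [], [], 0
for entry in sorted(os.listdir(MOUNT)):
    path = os.path.join(MOUNT, entry)
    if not os.path.isdir(path):
        continue
    size = dir_size(path)
    if current and current_size + size > CAP:
        streams.append(current)
        current, current_size = [], 0
    current.append(path)
    current_size += size
if current:
    streams.append(current)

# Emit a file list you can paste into the policy.
for group in streams:
    print("NEW_STREAM")
    for path in group:
        print(path)

Run from cron before the backup window and drop the output into the
policy's file list, and the stream groupings track the directory layout
without manual upkeep.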
HTH - M
-----Original Message-----
From: veritas-bu-admin AT mailman.eng.auburn DOT edu
[mailto:veritas-bu-admin AT mailman.eng.auburn DOT edu] On Behalf Of Jon D.
Benson
Sent: Tuesday, May 10, 2005 9:53 AM
To: veritas-bu AT mailman.eng.auburn DOT edu
Subject: [Veritas-bu] workarounds for max filesystem size limit
Greetings
I have run up against the max filesystem size limit in NetBackup v4.5
(FP7). I have a 2.0 TB filesystem that is now 63% full, and my backups
consistently fail. This filesystem is populated by our image-generating
program, which uses database-generated numbers to create new directories
for each image set. Because the directory structure keeps changing, I am
not sure how I can create pathnames for the policy that do not violate the
1.0 TB filesystem maximum. Does anyone have an idea for a workaround to
this situation? Any thoughts will be considered.
As always, I appreciate the insight of the group.
Take care,
--
Jon D. Benson
Network Systems Administrator
Neurome, Inc.
La Jolla, CA
_______________________________________________
Veritas-bu maillist - Veritas-bu AT mailman.eng.auburn DOT edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu