Re: [Networker] 45M iNode FS backup?
2011-05-10 15:19:40
In regard to: Re: [Networker] 45M iNode FS backup?, Brian Narkinsky said...:
This was from a thread back in 2006. I've used this method in several
places and it works well. It works on UNIX OSes as well.
Yes, it's been described a lot, and I referred to it (as the "split
client" method) in an earlier post in the thread.
It's no substitute for having decent support for large filesystems built
into the software. EMC has actually made this method more problematic
to use than it was when it was first described on the list, because
recent versions of the software prevent you from having multiple instances
of the same client in one group if any of them use the "All" saveset.
In the end, that NetWorker administrators have to jump through hoops like
this just to back up large filesystems says a lot about EMC's support
for the product.
Tim
I think somebody posted a method to get all the directories in a file
system.
If I remember correctly they had two client instances.
One had all the directories explicitly stated
E:\dir
E:\dir2
E:\dirx
The second client instance had the saveset ALL but had a directive to
skip the directories that were explicitly listed in the first client.
This allowed you to split the filesystem into streams while also not
missing any new directories.
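As a rough sketch of what the second client's directive might look like
(the directory names here are just the placeholders from above, and the
exact patterns would need to match your layout), NetWorker directives use
the << path >> / skip: syntax:

```
<< "E:\" >>
skip: dir dir2 dirx
```

The skip: ASM tells the "All" saveset instance to ignore those directory
names directly under E:\, so only the first client instance (with the
explicit savesets) backs them up, while anything new under E:\ is still
caught by the "All" instance.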
Another option might be to specify the save set as \\.\physicaldiskx
There was an exchange between me and somebody (Thierry, I think) a year
or two ago about this. There are some serious problems with this approach
when restore time comes.
You can only do fulls, since you are basically dumping the raw file
system. You can only restore the whole disk back to the same drive
letter; there's no restoring one or two files, etc.
This will allow you to do a rough dump of the NTFS partition. When I
tried it, it was very fast. It is sort of a poor man's SnapImage.
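For illustration only (the client name and drive number below are
invented, not from this thread), the idea is that the client resource's
save set attribute names the raw Windows device instead of a drive
letter, along the lines of:

```
type: NSR client;
name: fileserver01.example.com;
save set: "\\.\PhysicalDrive2";
```

Windows exposes whole physical disks under the \\.\PhysicalDriveN
namespace, which is why this behaves like an image dump rather than a
file-by-file walk, and also why the restore limitations above apply.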
Brian
On Tue, May 10, 2011 at 2:46 PM, Tim Mooney <Tim.Mooney AT ndsu DOT edu> wrote:
In regard to: Re: [Networker] 45M iNode FS backup?, Yaron Zabary said...:
I think that any contemporary major backup software should be able to
recognize that the file system it is backing up is large (ideally this
should be done per directory) and should be able to launch as many
parallel threads as are required to keep the backup from crawling.
I totally agree. Considering that this type of functionality was
actually a hidden/experimental feature in NetWorker several years ago
that was later disabled, I'm very disappointed that we still do not have
this type of capability in NetWorker.
--
Tim Mooney Tim.Mooney AT ndsu DOT
edu
Enterprise Computing & Infrastructure 701-231-1076 (Voice)
Room 242-J6, IACC Building 701-231-8541 (Fax)
North Dakota State University, Fargo, ND 58105-5164
To sign off this list, send email to listserv AT listserv.temple DOT edu and type
"signoff networker" in the body of the email. Please write to networker-request
AT listserv.temple DOT edu if you have any problems with this list. You can access the
archives at http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER