> FlashBackup will improve the performance (it caches the data as a sort of
> snapshot, then backs up the snapshot, so it's like one large file rather
> than loads of small ones, or something like that...) but because it works
> per volume, you need the space for the cache, which is at least the same
> size as the volume you are backing up.
What FlashBackup really does is snapshot the volume and then take
a *physical* backup of it: block by block, rather than walking the
file system. For file systems with lots of files, this can be a huge
saving; we typically see FlashBackups complete in half the time that
regular backups take.
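The difference can be sketched in a few lines. This is a minimal, hypothetical Python illustration of the two I/O patterns (the function names, paths, and block size are mine for illustration, not NetBackup internals): a physical backup is one sequential read stream over the raw device, while a logical backup pays open/read/close and metadata costs for every file it walks.

```python
import os

BLOCK_SIZE = 256 * 1024  # read the raw volume in fixed-size chunks


def physical_backup(device_path, out_path):
    """Block-by-block copy: one sequential read stream, no per-file overhead."""
    with open(device_path, "rb") as src, open(out_path, "wb") as dst:
        while chunk := src.read(BLOCK_SIZE):
            dst.write(chunk)


def logical_backup(root, visit):
    """File-system walk: metadata lookups plus one open/read/close per file."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            with open(os.path.join(dirpath, name), "rb") as f:
                visit(f.read())
```

With millions of files, the per-file overhead in the second pattern dominates, which is why the block-level approach wins on dense file systems.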
The cache volume only needs to be large enough to hold the writes made
to the volume while it is being backed up. Unless the volume is under a
very write-intensive workload, the cache does not need to be anywhere
near the size of the volume being backed up; in fact, I've backed up in
excess of 10TB of data with a shared cache volume of under 100GB, and it
never came close to filling (several GB would have sufficed). The
FlashBackup documentation explains how to size the cache volume.
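As a rough back-of-the-envelope sketch (my own arithmetic, not the sizing formula from the FlashBackup documentation): the copy-on-write cache only has to hold the blocks that change during the backup window, so an estimate is write rate times backup duration, plus headroom.

```python
def estimate_cache_gb(write_rate_mb_s, backup_hours, headroom=2.0):
    """Rough copy-on-write cache estimate, in GB.

    write_rate_mb_s: sustained write rate to the volume during the backup
    backup_hours:    length of the backup window
    headroom:        safety multiplier for write bursts (assumption: 2x)
    """
    changed_gb = write_rate_mb_s * backup_hours * 3600 / 1024
    return changed_gb * headroom


# e.g. 5 MB/s of writes over a 4-hour window:
# estimate_cache_gb(5, 4) -> about 140 GB with 2x headroom
```

Note how weakly this depends on the volume's total size: a 10TB volume with light write traffic needs no bigger a cache than a 1TB one.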
The performance gain from FlashBackup depends on the overhead of your
file system operations. For NTFS, that overhead is quite high (our big
file systems run to millions of files), and NTFS is not very efficient
at allocating resources while walking the file system. We find that
FlashBackup is much more polite to the target than a traditional client
backup.
…/Ed
--
Ed Wilts, Mounds View, MN, USA
mailto:ewilts AT ewilts DOT org
I GoodSearch for Bundles Of Love
http://www.goodsearch.com/?charityid=821118