
Re: [Veritas-bu] Single Node Linux NFS Server Flashbackup

2009-05-28 09:37:04
From: Ed Wilts <ewilts AT ewilts DOT org>
To: "Martin, Jonathan" <JMARTI05 AT intersil DOT com>
Date: Thu, 28 May 2009 08:33:51 -0500
On Wed, May 27, 2009 at 8:46 PM, Martin, Jonathan <JMARTI05 AT intersil DOT com> wrote:
Considering our success with FlashBackup on Windows, I'm hoping to pitch the same for a new NFS file server we're looking to implement.  Unfortunately, I don't know much about VxFS.  Has anyone implemented this successfully with RHEL 5 in an NFS server configuration with multiple terabytes of data?

As a database server, yes.  As a file server, no.  Lots of vxfs around here but our FlashBackup work on vxfs has been on Solaris.

See ftp://exftpp.symantec.com/pub/support/products/NetBackup_Enterprise_Server/279042.pdf for the compatibility details.  RHEL 5 didn't become supported until 6.5.4 which isn't due out for another few weeks.

Does anyone know what pricing and options I'll need for Storage Foundation?  I'm assuming I'll simply present the space and then use some new command-line tools to format the volume with VxFS.  After it is formatted, would you say that volume behaves more or less like an ext3 or ext4 volume (permissions, NFS, etc.)?

If you're already used to a logical volume manager, then vxfs isn't that bad.  Stay up to date on patches though!
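For what it's worth, the basic create-and-mount flow looks roughly like this. This is a hedged sketch only: the disk group "datadg" and volume "datavol" are hypothetical names, and it assumes Storage Foundation is already installed and licensed on the box.

```shell
# Create a VxFS file system on an existing VxVM volume
# ("datadg"/"datavol" are hypothetical names).
mkfs -t vxfs /dev/vx/rdsk/datadg/datavol

# Mount it like any other file system.
mkdir -p /export/data
mount -t vxfs /dev/vx/dsk/datadg/datavol /export/data

# From here it behaves like a local file system for NFS purposes:
echo '/export/data *(rw,no_root_squash)' >> /etc/exports
exportfs -ra
```

Permissions, exports, and day-to-day administration then work much as they would on ext3; the VxVM/VxFS-specific parts are mostly volume creation, resizing, and patching.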
 
Other than the native NBU client, NDMP, and hardware-based solutions, does anyone have any other ideas on quickly moving large amounts of NFS file data? (Millions of small files, etc.)

rsync + mfork.  It sucks less than other options.

I just finished moving a 20TB application with about 150M files on it from Ibrix file systems on Linux to a NetApp filer.  We also moved other file systems, containing another couple hundred million files, from xfs to NetApp.

Grab the latest (3.0.x) release of rsync, and pipe your rsync commands into mfork for parallelism.  Run as many streams as you can until the users complain or you bury your network interfaces.  I've done both :-).  Do lots of logging.
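The pattern above can be sketched as follows. This is an illustrative sketch, not the poster's actual script: the source and destination paths are hypothetical, and since mfork is not widely packaged, GNU `xargs -P` is shown as a commonly available stand-in for the parallel runner.

```shell
#!/bin/sh
# Sketch: one rsync stream per top-level directory, run N at a time.
# SRC/DST/LOGDIR are hypothetical; adjust rsync flags to taste.
SRC=/export/data
DST=filer:/vol/data
LOGDIR=/var/tmp/rsync-logs
mkdir -p "$LOGDIR"

# Emit one rsync command per top-level directory, logging each stream,
# then feed the command lines to a parallel runner (8 streams here).
ls "$SRC" | while read d; do
    echo "rsync -aHS --numeric-ids $SRC/$d/ $DST/$d/ >$LOGDIR/$d.log 2>&1"
done | xargs -d '\n' -P 8 -I CMD sh -c CMD
```

Raising the `-P` count increases parallelism until the network interfaces (or the users) give out, and the per-stream logs make it easy to see which trees finished and which stalled.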

To efficiently move the data, you need to understand the data to determine how best to parallelize the moves.  You can't blindly move the data since Murphy's law says (and we've validated) that you'll have 1 tree in the set that will take forever to move, blocking all your other moves.
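A quick survey pass helps with that planning. A minimal sketch (the `/export/data` path is hypothetical): rank trees by bytes and by file count before deciding how to split them into streams, since with millions of small files the count matters as much as the size.

```shell
# Sketch: find the trees most likely to dominate the migration.
SRC=/export/data

# Biggest top-level trees by size.
du -s "$SRC"/* 2>/dev/null | sort -rn | head -20

# Biggest top-level trees by file count (small files cost per-file
# overhead, so a tree can be cheap in bytes but expensive to copy).
for d in "$SRC"/*/; do
    printf '%s %s\n' "$(find "$d" -type f | wc -l)" "$d"
done | sort -rn | head -20
```

Any tree that dwarfs the rest is a candidate for splitting into its own set of subdirectory-level streams rather than being copied as one unit.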
    .../Ed

Ed Wilts, RHCE, BCFP, BCSD, SCSP, SCSE
ewilts AT ewilts DOT org
_______________________________________________
Veritas-bu maillist  -  Veritas-bu AT mailman.eng.auburn DOT edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu