Not necessarily. Have you checked any of the performance stats on the filer?
Do you know if you're overrunning NVRAM or experiencing back-to-back CPs?
I've seen this happen with NetApp before, where performance starts out fine, but
if the workload is such that you start to overrun cache, you're done for.
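If you have console access, a quick way to check is sysstat (this assumes the classic 7-mode ONTAP CLI; the clustered CLI spells things differently):

```
# On the filer console (7-mode ONTAP CLI assumed). In the per-second
# sysstat output, watch the CP type column: a 'B' there means
# back-to-back consistency points -- NVRAM is flushing as fast as it
# can and incoming writes are waiting on it.
sysstat -x 1
```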
Another place to look is the NFS mount options on the client. Have you tried
testing backups with different mount options? How about modifying rsize/wsize,
or setting the noacl option (there's probably a bunch of ACL overhead with this
many files)? Have you tried jumbo frames (making sure every hop in the network
is also using jumbo frames)?
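For example, something like the following for a test mount (Linux-style syntax shown; FreeBSD's mount_nfs spells some of these options differently, and the filer name, export path, and values here are placeholders, not tuned recommendations):

```
# Hypothetical test mount -- filer01:/vol/images and the sizes are
# placeholders. noacl skips per-file ACL lookups; larger rsize/wsize
# cut round trips on large reads. NFSv3 over TCP assumed.
mount -t nfs -o vers=3,tcp,rsize=65536,wsize=65536,noacl \
    filer01:/vol/images /mnt/images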
Obviously you'd need to then do some test restores to make sure the files are
restored in the state needed/expected.
You've introduced a new layer of access to this data, so you'll likely have to
test multiple configurations for this access before you find the best combo.
Identify all the different configuration points in this new path and then
break down the options you have for each. You may have to draw up a small
matrix to map out all the different combinations to test.
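A small sketch of what enumerating that matrix could look like (the configuration points and values below are placeholders for illustration, not recommendations) -- each printed line is one combination to run a timed test backup against:

```shell
# Enumerate every combination of a few hypothetical configuration
# points; one line of output per combination to test.
gen_matrix() {
    for rsize in 32768 65536; do
        for proto in tcp udp; do
            for acl in acl noacl; do
                echo "rsize=$rsize,proto=$proto,$acl"
            done
        done
    done
}
gen_matrix
```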
But even so, 1 billion small files is by far the largest number I've ever heard
of, and I agree with Bryan's assessment.
From: veritas-bu-bounces AT mailman.eng.auburn DOT edu
[mailto:veritas-bu-bounces AT mailman.eng.auburn DOT edu] On Behalf Of Nate
Sent: Friday, September 10, 2010 13:21
To: Bryan Bahnmiller
Cc: Sanders, Nate; veritas-bu AT mailman.eng.auburn DOT edu
Subject: Re: [Veritas-bu] NBU 6.5.6 client on FreeBSD 7.2 host
Sure, but 3-4MB/s?! NDMP to tape is 40-50MB/s. Regular jobs to tape are
160MB/s. There is no excuse for the speed to be THIS slow. So I went
back and double-checked a different job for the same host, which was an
OS backup. That job was also 6MB/s. So obviously it's something with the
client and not the data/directory being backed up. All of our other OS
backups are 10x-20x faster. This must be a host problem or an FBSD client
issue.
On 09/10/2010 02:54 PM, Bryan Bahnmiller wrote:
> Any filesystem you have will start out quickly but then drop in speed as
> it starts drilling down into the directory structure. The more directory
> levels you have, the slower it is. Which makes sense, since you are sort of
> following a tree structure down to the lower directory levels. Every time you
> drop down in a tree structure, you are branching to however many directories
> you have in that particular branch... And when you finish one branch, you pop
> back up a level and branch down to the next one. So you are following index
> links to index links to .... until you hit the actual file being backed up.
> Simple testing showed me long ago that the fewer levels you have in the
> directory tree, the quicker the backups. And depending on the filesystem, it
> can be orders of magnitude difference in speed.
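One rough way to see how deep a tree actually goes before committing to a layout (a sketch; `depth_histogram` is an illustrative helper, and depth here is just the count of `/`-separated path components):

```shell
# Histogram of directory depths under a starting point: for each
# directory, count its path components, then tally how many
# directories sit at each depth.
depth_histogram() {
    find "$1" -type d | awk -F/ '{ print NF }' | sort -n | uniq -c
}
```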
> Nate Sanders <sandersn AT dmotorworks DOT com>
> Sent by: veritas-bu-bounces AT mailman.eng.auburn DOT edu
> 09/10/2010 02:02 PM
> "veritas-bu AT mailman.eng.auburn DOT edu" <veritas-bu AT mailman.eng.auburn
> DOT edu>
> Re: [Veritas-bu] NBU 6.5.6 client on FreeBSD 7.2 host
> Okay, so that multiplex test was user error. I didn't have "max streams per
> drive" set up right. At 4 streams we saw 40MB/s, at 8 streams we see
> 50MB/s. But... we have a new problem. Within 1-2 minutes the I/O starts
> dropping. At 3:00 minutes into an 8 stream job, we're down to 38MB/s.
> Earlier when testing at 4 streams, we were 10 minutes in and I/O had
> slowly dropped from 40MB/s down to 12MB/s.
> What in the world is going on?
> On 09/10/2010 01:41 PM, Nate Sanders wrote:
>> Yes we are well aware of the limitations of NDMP and small files, thus
>> the reason we're looking at trying NFS w/ snapshots. Our NetApp 6040 is
>> peaking around 40-50MB/s, but the issue right now is that we're
>> getting such low performance from this FBSD box via NFS.
>> I turned on multiplexing to 4, and we're still seeing only 3-4MB/s.
>> On 09/10/2010 01:03 PM, Martin, Jonathan wrote:
>>> I've tested NDMP on 6 different arrays and it has never moved millions
>>> of small files well. We maxed out backup performance on our NetApp FAS
>>> 2xxx with 2 streams at approx 20MB/sec total. We're hoping to test
>>> SMTape, which purportedly does a bit level dump of the entire array. I
>>> haven't had a chance to test this yet, but according to NetApp it will
>>> get us our weekly full and drive LTO3. We'll then need to put some sort
>>> of forever incremental or snapshot backup in-between the SMTape dumps.
>>> -----Original Message-----
>>> From: veritas-bu-bounces AT mailman.eng.auburn DOT edu
>>> [mailto:veritas-bu-bounces AT mailman.eng.auburn DOT edu] On Behalf Of Nate
>>> Sent: Friday, September 10, 2010 12:22 PM
>>> To: veritas-bu AT mailman.eng.auburn DOT edu
>>> Subject: [Veritas-bu] NBU 6.5.6 client on FreeBSD 7.2 host
>>> Now that we made it to 6.5.6 we're able to start testing NFS performance
>>> from our NetApp VS NDMP. For the longest time we've done the backup of
>>> some 1 billion small image files off the NetApp via NDMP. This job
>>> usually took 1-3 weeks to complete a full sweep via NDMP.
>>> Since we have support for FBSD we thought we would try doing NFS via
>>> that client as Linux NFS is not as powerful as the BSD/Solaris variety.
>>> Well on our initial test of a small volume from the NetApp, we're seeing
>>> 2-4MB/s performance. Confirmed via bptm log. This is going straight to
>>> LTO4 tape, which usually backs up around 150MB/s. Logs show that the
>>> previous NDMP jobs from the NetApp were doing around 40MB/s direct to
>>> two dedicated NDMP LTO4 drives.
>>> Supposedly multiplexing for NDMP will come to NBU 7.x shortly and we
>>> will test again with that in the future. Right now I am not multiplexing
>>> this NFS job but while looking in bptm I don't see the usual "waited for
>>> buffer" errors that would tell me that I _should_ increase it. Is it
>>> still likely multiplexing would increase the overall performance here?
>>> Is this a known issue with FBSD clients? Is there something else I
>>> should be looking at?
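One knob worth checking on the media server while you're in bptm is the data-buffer touch files NetBackup reads at job start (the values below are illustrative only, not tuned recommendations):

```
# NetBackup media server data-buffer touch files, read at job start.
# SIZE_DATA_BUFFERS must be a multiple of 1024 and should match what
# the tape drive streams best with; values here are examples only.
echo 262144 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS
echo 64     > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS
```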
> Nate Sanders Digital Motorworks
> System Administrator (512) 692 - 1038
> This message and any attachments are intended only for the use of the
> addressee and may contain information that is privileged and confidential. If
> the reader of the message is not the intended recipient or an authorized
> representative of the intended recipient, you are hereby notified that any
> dissemination of this communication is strictly prohibited. If you have
> received this communication in error, please notify us immediately by e-mail
> and delete the message and any attachments from your system.
> Veritas-bu maillist - Veritas-bu AT mailman.eng.auburn DOT edu
> DTCC DISCLAIMER: This email and any files transmitted with it are
> confidential and intended solely for the use of the individual or entity to
> whom they are addressed. If you have received this email in error, please
> notify us immediately and delete the email and any attachments from your
> system. The recipient should check this email and any attachments for the
> presence of viruses. The company accepts no liability for any damage caused
> by any virus transmitted by this email.
Nate Sanders Digital Motorworks
System Administrator (512) 692 - 1038