Subject: Re: [Veritas-bu] old policies
From: "Rich Hansen (IT)" <Rich.Hansen AT ros DOT com>
To: <veritas-bu AT mailman.eng.auburn DOT edu>
Date: Mon, 13 Sep 2010 10:26:55 -0700
I zip the Policy in /usr/openv/netbackup/db/class to an archive
directory, then delete the Policy.  If the specifics of the Policy ever
need to be referred to, or if it needs to be reinstated, it is easily
done.
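
A minimal sketch of that archive-then-delete workflow, assuming a POSIX
shell with zip available on the master server; the policy name, the
archive directory, and the use of bppldelete rather than the GUI are
placeholders/assumptions:

    # Hypothetical policy name and archive location.
    POLICY=OLD_POLICY
    ARCHIVE_DIR=/var/tmp/policy_archive
    mkdir -p "$ARCHIVE_DIR"
    cd /usr/openv/netbackup/db/class
    # Zip the policy's definition directory under a dated name.
    zip -r "$ARCHIVE_DIR/${POLICY}_$(date +%Y%m%d).zip" "$POLICY"
    # Delete the policy only after the archive lists cleanly.
    unzip -l "$ARCHIVE_DIR/${POLICY}_$(date +%Y%m%d).zip" >/dev/null &&
        /usr/openv/netbackup/bin/admincmd/bppldelete "$POLICY"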

Thanks

-----Original Message-----
From: veritas-bu-bounces AT mailman.eng.auburn DOT edu
[mailto:veritas-bu-bounces AT mailman.eng.auburn DOT edu] On Behalf Of
veritas-bu-request AT mailman.eng.auburn DOT edu
Sent: Friday, September 10, 2010 1:21 PM
To: veritas-bu AT mailman.eng.auburn DOT edu
Subject: Veritas-bu Digest, Vol 53, Issue 6

Send Veritas-bu mailing list submissions to
        veritas-bu AT mailman.eng.auburn DOT edu

To subscribe or unsubscribe via the World Wide Web, visit
        http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
or, via email, send a message with subject or body 'help' to
        veritas-bu-request AT mailman.eng.auburn DOT edu

You can reach the person managing the list at
        veritas-bu-owner AT mailman.eng.auburn DOT edu

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Veritas-bu digest..."


Today's Topics:

   1. Re: old policies (David McMullin)
   2. Re: old policies (Ed Wilts)
   3. Re: Email Notifications - Changed after 6.5.6 (William Brown)
   4.  Windows 2008 R2 NBU 7.0 Master,  attempting CIFS Basic Disk S
      (spaldam)
   5. emmlib_UpdateDriveRuntime failed, status=258 & status=304
      (Michael Graff Andersen)
   6. NBU 6.5.6 client on FreeBSD 7.2 host (Nate Sanders)
   7. Re: NBU 6.5.6 client on FreeBSD 7.2 host (Martin, Jonathan)
   8. Re: NBU 6.5.6 client on FreeBSD 7.2 host (Nate Sanders)
   9. Re: NBU 6.5.6 client on FreeBSD 7.2 host (Nate Sanders)
  10. Re: NBU 6.5.6 client on FreeBSD 7.2 host (Bryan Bahnmiller)
  11. Re: NBU 6.5.6 client on FreeBSD 7.2 host (Nate Sanders)


----------------------------------------------------------------------

Message: 1
Date: Thu, 9 Sep 2010 15:14:53 -0400
From: "David McMullin" <David.McMullin AT CBC-Companies DOT com>
Subject: Re: [Veritas-bu] old policies
To: "veritas-bu AT mailman.eng.auburn DOT edu"
        <veritas-bu AT mailman.eng.auburn DOT edu>
Message-ID:
        <C83D277D82594A4C8A269B151C83B89A013A1C83942F AT CBCMAIL07 DOT cbc.local>
Content-Type: text/plain; charset=us-ascii

My standard is to make a policy named "ARCHIVED" that never runs and put
the clients there.
If clients are not in a policy they do not show up in various lists;
NetBackup should scan the images directory for them, but it does not...

It is also easier when installing new clients to put them in a policy
that does not run on a schedule, so adding and testing are simplified.
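
A minimal command-line sketch of that parking pattern, assuming the
standard admincmd path; the policy, client, and hardware/OS strings are
placeholders:

    cd /usr/openv/netbackup/bin/admincmd
    # Create the holding policy once; with no schedules defined, it
    # never runs.
    ./bppolicynew ARCHIVED
    # Move a retired client out of its active policy and into ARCHIVED.
    ./bpplclients ARCHIVED -add oldhost01 Linux RedHat
    ./bpplclients OLD_ACTIVE_POLICY -delete oldhost01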

------------------------------

Message: 2
Date: Wed, 1 Sep 2010 07:49:29 -0700 (PDT)
From: Carlos Alberto Lima dos Santos <carlos_listas AT yahoo.com DOT br>
Subject: [Veritas-bu] Re:  Deleting policies
To: "VERITAS-BU AT MAILMAN.ENG.AUBURN DOT EDU"
        <VERITAS-BU AT MAILMAN.ENG.AUBURN DOT EDU>
Message-ID: <669254.40566.qm AT web52403.mail.re2.yahoo DOT com>
Content-Type: text/plain; charset=iso-8859-1

Deleting policies does not affect the backup images, but you need to
remember the client name when you need a restore, in order to find the
images.

See you

========================================
Carlos Alberto L. dos Santos (TOCA)
Computer Engineering - Jundiaí - SP, Brazil
http://www.linkedin.com/in/carlostoca
http://netbackupblog.blogspot.com/
carlos_listas AT yahoo.com DOT br
---------------------------------------- 



----- Original Message ----
From: Nate Sanders <sandersn AT dmotorworks DOT com>
To: "VERITAS-BU AT MAILMAN.ENG.AUBURN DOT EDU"
<VERITAS-BU AT MAILMAN.ENG.AUBURN DOT EDU>
Sent: Tuesday, August 31, 2010 15:33:54
Subject: [Veritas-bu] Deleting policies

In NBU 6.5.6, is there any danger in deleting policies via the GUI? Does
this affect the ability to restore, reference, scan, search, or dig up
information about old images on tape that used these now-removed
policies?

-- 
Nate Sanders            Digital Motorworks
System Administrator      (512) 692 - 1038




_______________________________________________
Veritas-bu maillist  -  Veritas-bu AT mailman.eng.auburn DOT edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu




------------------------------

Message: 2
Date: Thu, 9 Sep 2010 15:10:02 -0500
From: Ed Wilts <ewilts AT ewilts DOT org>
Subject: Re: [Veritas-bu] old policies
To: David McMullin <David.McMullin AT cbc-companies DOT com>
Cc: "veritas-bu AT mailman.eng.auburn DOT edu"
        <veritas-bu AT mailman.eng.auburn DOT edu>
Message-ID:
        <AANLkTimgkQnjgMA_UrvdT-k+n+KCp2+Kgx_ss6whuhUZ AT mail.gmail DOT com>
Content-Type: text/plain; charset="iso-8859-1"

On Thu, Sep 9, 2010 at 2:14 PM, David McMullin <
David.McMullin AT cbc-companies DOT com> wrote:

> My standard is to make a policy named "ARCHIVED" that never runs and
> put the clients there.
> If clients are not in a policy they do not show up in various lists;
> NetBackup should scan the images directory for them, but it does not...
>

One client per policy solves a lot of problems.  You can then just
deactivate the policy...
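
A minimal sketch of that one-client-per-policy pattern, assuming a
template policy and the standard admincmd path; the template, policy,
and host names are placeholders, and the -sameas/-inactive options are
from bppolicynew and bpplinfo respectively:

    cd /usr/openv/netbackup/bin/admincmd
    # Clone a per-client policy from a template, then add the client.
    ./bppolicynew STD_oldhost01 -sameas TEMPLATE_STD
    ./bpplclients STD_oldhost01 -add oldhost01 Linux RedHat
    # Retiring the host later is then a single deactivation.
    ./bpplinfo STD_oldhost01 -modify -inactive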

Ed Wilts, RHCE, BCFP, BCSD, SCSP, SCSE
ewilts AT ewilts DOT org
Linkedin <http://www.linkedin.com/in/ewilts>

------------------------------

Message: 3
Date: Fri, 10 Sep 2010 00:41:46 +0200
From: "William Brown" <william.d.brown AT gsk DOT com>
Subject: Re: [Veritas-bu] Email Notifications - Changed after 6.5.6
To: "VERITAS-BU AT mailman.eng.auburn DOT edu"
        <VERITAS-BU AT mailman.eng.auburn DOT edu>
Message-ID:
        <9318C6267A8C894DB030C49D5CE1B90A3597204A23 AT 019D-EUMSG-02.019D.MGD DOT MSFT.NET>
Content-Type: text/plain; charset="utf-8"

The upper one looks like those from the "notify" scripts in
/usr/openv/netbackup/bin.  They get overwritten by updates, which is a
pain.  I also discovered that at 6.5.6 some get extra parameters related
to multi-stream jobs.  Amusingly, one now gets (I think) 7 parameters,
checks that it has at least 6, and the comment says 5.  If you used to
get them, then you had uncommented the lines at the end that contain the
mail commands.  The logs that are mailed are otherwise overwritten by
the next job.

They are not, I think, always overwritten, so you may not have seen this
before.  Somewhere in the documentation I recall it warns of this and
says you should save them.  Luckily, it does of course save them itself,
in the tar file it keeps to support backing out the patch.  I recovered
our custom ones from there, and at that point found that the underlying
template had changed.  As we run both 6.5.6 and 6.5.4 that is a bore, as
I now need two variants or more customisation, since the scripts get
called with different numbers of parameters.
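
One way to cope is to make the script body tolerant of the parameter
count.  A hedged sketch of a backup_exit_notify body (not the shipped
script; the log path is a placeholder) that logs whatever it is handed
to a per-job file, so 6- and 7-parameter callers both work and the next
job cannot overwrite the output:

    #!/bin/sh
    # First six arguments, as in the notification emails quoted below:
    # client, policy, schedule, schedule type, exit status, stream.
    CLIENT=$1; POLICY=$2; SCHEDULE=$3; SCHEDTYPE=$4; STATUS=$5
    STREAM=${6:-0}; EXTRA=${7:-}   # 6.5.6 adds multi-stream arguments
    # Unique per-job file name, so the next job does not overwrite it.
    LOG=/var/tmp/exit_notify.${CLIENT}.$$.log
    {
        echo "CLIENT:   $CLIENT"
        echo "POLICY:   $POLICY"
        echo "SCHEDULE: $SCHEDULE ($SCHEDTYPE)"
        echo "STATUS:   $STATUS  STREAM: $STREAM  EXTRA: $EXTRA"
    } > "$LOG"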

The lower one I also get, but that comes from the "client sends email"
or "server sends email" settings in the server properties.

William D L Brown

From: veritas-bu-bounces AT mailman.eng.auburn DOT edu
[mailto:veritas-bu-bounces AT mailman.eng.auburn DOT edu] On Behalf Of WEAVER,
Simon (external)
Sent: 09 September 2010 15:26
To: VERITAS-BU AT mailman.eng.auburn DOT edu
Subject: [Veritas-bu] Email Notifications - Changed after 6.5.6


Hi All,
Got a little puzzle that I cannot seem to track down, but here goes...

Win2k3 SP2 Ent + NBU 6.5.6 Master, Media and all clients.

The update all went on well (originally 6.5.3), apart from the BMR Boot
Server still showing 6.5.3, although I will look into that later by
reapplying the update pack.

But the main problem is that email notifications seem to have changed. I
used to get all emails, regardless of status, detailing this....

Tue 09/07/2010  07:10 -----------------------------
Tue 09/07/2010  07:10        CLIENT:  SQL
Tue 09/07/2010  07:10        POLICY:  SQL_SERVER
Tue 09/07/2010  07:10      SCHEDULE:  SERVER_FULL
Tue 09/07/2010  07:10 SCHEDULE TYPE:  FULL
Tue 09/07/2010  07:10        STATUS:  0
Tue 09/07/2010  07:10        STREAM:  0
Tue 09/07/2010  07:10 -----------------------------

But since the update, I am now only getting emails with status 1 or
higher and they look like this in the email....

Backup on client FILE001 for user root by server MyBackup was partially
successful.

Policy = TEST_BACKUP
Schedule = Test_Incr_Backup

File list
---------
C:\Colours.reg

For the life of me, I have checked my backup_exit_notify.cmd and
nbmail.cmd and I even have the original versions prior to the update,
and yet these have not changed.

Any ideas?

Regards

Si


------------------------------

Message: 4
Date: Thu, 09 Sep 2010 23:10:03 -0400
From: spaldam <netbackup-forum AT backupcentral DOT com>
Subject: [Veritas-bu]  Windows 2008 R2 NBU 7.0 Master,  attempting CIFS
        Basic Disk S
To: VERITAS-BU AT MAILMAN.ENG.AUBURN DOT EDU
Message-ID: <1284088203.m2f.342507 AT www.backupcentral DOT com>


mozje wrote:
> Just as a follow-up, should people care: this is working perfectly in
> NBU; you only need to set up the CIFS share correctly :) which was not
> the case at my first attempts.


How do you set it up correctly? (as opposed to incorrectly)

Thanks.





------------------------------

Message: 5
Date: Fri, 10 Sep 2010 10:49:06 +0200
From: Michael Graff Andersen <mian71 AT gmail DOT com>
Subject: [Veritas-bu] emmlib_UpdateDriveRuntime failed, status=258 &
        status=304
To: "veritas-bu AT mailman.eng.auburn DOT edu"
        <VERITAS-BU AT mailman.eng.auburn DOT edu>
Message-ID:
        <AANLkTikk1GZE+UzV3fPOd3_PxZxCioO3bRoBBcz5o+Ao AT mail.gmail DOT com>
Content-Type: text/plain; charset="iso-8859-1"

Hello

We have some problems with our Windows 2003 x64 master/media server.

I have found these entries in the Application Event log, but have not
been able to find the cause of them.

I have been searching the Symantec Knowledge Base, but have had no luck
yet.

Do you have any suggestions or tips?

Regards
Michael

------------------------------

Message: 6
Date: Fri, 10 Sep 2010 11:22:11 -0500
From: Nate Sanders <sandersn AT dmotorworks DOT com>
Subject: [Veritas-bu] NBU 6.5.6 client on FreeBSD 7.2 host
To: "veritas-bu AT mailman.eng.auburn DOT edu"
        <veritas-bu AT mailman.eng.auburn DOT edu>
Message-ID: <4C8A5B33.3060204 AT dmotorworks DOT com>
Content-Type: text/plain;  charset="iso-8859-1"

Now that we made it to 6.5.6, we're able to start testing NFS
performance from our NetApp versus NDMP. For the longest time we've done
the backup of some 1 billion small image files off the NetApp via NDMP.
This job usually takes 1-3 weeks to complete a full sweep.

Since 6.5.6 has support for FreeBSD, we thought we would try NFS via
that client, as Linux NFS is not as capable as the BSD/Solaris variety.
Well, on our initial test of a small volume from the NetApp, we're
seeing 2-4MB/s, confirmed via the bptm log. This is going straight to
LTO4 tape, which usually backs up around 150MB/s. Logs show that the
previous NDMP jobs from the NetApp were doing around 40MB/s direct to
two dedicated NDMP LTO4 drives.

Supposedly multiplexing for NDMP will come to NBU 7.x shortly, and we
will test again with that in the future. Right now I am not multiplexing
this NFS job, but while looking in bptm I don't see the usual "waited
for buffer" messages that would tell me I _should_ increase the buffer
count. Is it still likely multiplexing would increase the overall
performance here? Is this a known issue with FreeBSD clients? Is there
something else I should be looking at?
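
For reference, the checks I'm working from on the media server look like
this; a hedged sketch, where the values shown are common starting points
rather than recommendations for this site:

    # How often bptm actually starved for data while writing tape:
    grep -c "waited for full buffer" /usr/openv/netbackup/logs/bptm/log.*
    # Standard tuning touch files; adjust, then rerun the test.
    echo 262144 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS
    echo 64     > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS
    echo 262144 > /usr/openv/netbackup/NET_BUFFER_SZ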

-- 
Nate Sanders            Digital Motorworks
System Administrator      (512) 692 - 1038





------------------------------

Message: 7
Date: Fri, 10 Sep 2010 14:03:17 -0400
From: "Martin, Jonathan" <JMARTI05 AT intersil DOT com>
Subject: Re: [Veritas-bu] NBU 6.5.6 client on FreeBSD 7.2 host
To: "Nate Sanders" <sandersn AT dmotorworks DOT com>,
        <veritas-bu AT mailman.eng.auburn DOT edu>
Message-ID:
        <13E204E614D8E04FAF594C9AA9ED0BB70F834137 AT PBCOMX02.intersil DOT corp>
Content-Type: text/plain;       charset="us-ascii"

I've tested NDMP on 6 different arrays and it has never moved millions
of small files well. We maxed out backup performance on our NetApp FAS
2xxx with 2 streams at approx 20MB/sec total.  We're hoping to test
SMTape, which purportedly does a bit-level dump of the entire array.  I
haven't had a chance to test it yet, but according to NetApp it will get
us our weekly full and drive LTO3. We'll then need to put some sort of
forever-incremental or snapshot backup in between the SMTape dumps.

-Jonathan

-----Original Message-----
From: veritas-bu-bounces AT mailman.eng.auburn DOT edu
[mailto:veritas-bu-bounces AT mailman.eng.auburn DOT edu] On Behalf Of Nate
Sanders
Sent: Friday, September 10, 2010 12:22 PM
To: veritas-bu AT mailman.eng.auburn DOT edu
Subject: [Veritas-bu] NBU 6.5.6 client on FreeBSD 7.2 host



------------------------------

Message: 8
Date: Fri, 10 Sep 2010 13:41:51 -0500
From: Nate Sanders <sandersn AT dmotorworks DOT com>
Subject: Re: [Veritas-bu] NBU 6.5.6 client on FreeBSD 7.2 host
To: "Martin, Jonathan" <JMARTI05 AT intersil DOT com>
Cc: "Sanders, Nate" <sandersn AT digitalmotorworks DOT com>,
        "veritas-bu AT mailman.eng.auburn DOT edu"
        <veritas-bu AT mailman.eng.auburn DOT edu>
Message-ID: <4C8A7BEF.4070304 AT dmotorworks DOT com>
Content-Type: text/plain;  charset="iso-8859-1"

Yes, we are well aware of the limitations of NDMP and small files; that
is why we're looking at trying NFS with snapshots. Our NetApp 6040 peaks
around 40-50MB/s, but the issue right now is that we're getting such low
performance from this FreeBSD box via NFS.

I turned multiplexing up to 4, and we're still seeing only 3-4MB/s.


On 09/10/2010 01:03 PM, Martin, Jonathan wrote:

-- 
Nate Sanders            Digital Motorworks
System Administrator      (512) 692 - 1038





------------------------------

Message: 9
Date: Fri, 10 Sep 2010 14:02:45 -0500
From: Nate Sanders <sandersn AT dmotorworks DOT com>
Subject: Re: [Veritas-bu] NBU 6.5.6 client on FreeBSD 7.2 host
Cc: "veritas-bu AT mailman.eng.auburn DOT edu"
        <veritas-bu AT mailman.eng.auburn DOT edu>
Message-ID: <4C8A80D5.2060302 AT dmotorworks DOT com>
Content-Type: text/plain;  charset="iso-8859-1"

Okay, so that multiplex test was user error; I didn't have "max streams
per drive" set up right. At 4 streams we saw 40MB/s; at 8 streams we see
50MB/s. But... we have a new problem. Within 1-2 minutes the I/O starts
dropping. At 3:00 minutes into an 8-stream job, we're down to 38MB/s.
Earlier, when testing at 4 streams, we were 10 minutes in and I/O had
slowly dropped from 40MB/s down to 12MB/s.

What in the world is going on?

On 09/10/2010 01:41 PM, Nate Sanders wrote:

-- 
Nate Sanders            Digital Motorworks
System Administrator      (512) 692 - 1038





------------------------------

Message: 10
Date: Fri, 10 Sep 2010 14:54:45 -0500
From: Bryan Bahnmiller <bbahnmiller AT dtcc DOT com>
Subject: Re: [Veritas-bu] NBU 6.5.6 client on FreeBSD 7.2 host
To: Nate Sanders <sandersn AT dmotorworks DOT com>
Cc: "veritas-bu AT mailman.eng.auburn DOT edu"
        <veritas-bu AT mailman.eng.auburn DOT edu>
Message-ID:
        <OFD9E30BBB.CDC67AFE-ON8525779A.006CBB3A-8625779A.006D61FA AT dtcc DOT com>
Content-Type: text/plain; charset="us-ascii"

Nate,

    Any filesystem you have will start out quickly but then drop in
speed as it starts drilling down into the directory structure. The more
directory levels you have, the slower it is. That makes sense, since you
are following a tree structure down to the lower directory levels. Every
time you drop down in the tree, you branch into however many directories
that particular branch holds... and when you finish one branch, you pop
back up a level and branch down into the next one. So you are following
index links to index links to .... until you hit the actual file being
backed up.

     Simple testing showed me long ago that the fewer levels you have in
the directory tree, the quicker the backups. And depending on the
filesystem, it can make orders of magnitude of difference in speed.
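
A quick way to see how deep a tree the backup actually has to walk; a
sketch using standard find/awk, where the mount point is a placeholder:

    MOUNT=/mnt/netapp_vol
    # Histogram of directory depth: count of directories at each level.
    find "$MOUNT" -type d | awk -F/ '{print NF-1}' | sort -n | uniq -c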

        Bryan




On 09/10/2010 02:02 PM, Nate Sanders <sandersn AT dmotorworks DOT com> wrote:

------------------------------

Message: 11
Date: Fri, 10 Sep 2010 15:21:10 -0500
From: Nate Sanders <sandersn AT dmotorworks DOT com>
Subject: Re: [Veritas-bu] NBU 6.5.6 client on FreeBSD 7.2 host
To: Bryan Bahnmiller <bbahnmiller AT dtcc DOT com>
Cc: "Sanders, Nate" <sandersn AT digitalmotorworks DOT com>,
        "veritas-bu AT mailman.eng.auburn DOT edu"
        <veritas-bu AT mailman.eng.auburn DOT edu>
Message-ID: <4C8A9336.2020409 AT dmotorworks DOT com>
Content-Type: text/plain;  charset="iso-8859-1"

Sure, but 3-4MB/s?! NDMP to tape is 40-50MB/s. Regular jobs to tape are
160MB/s. There is no excuse for the speed being THIS slow. So I went
back and double-checked a different job for the same host, which was an
OS backup. That job was also 6MB/s. So obviously it's something with the
client and not the data/directory being backed up. All of our other OS
backups are 10x-20x faster. This must be a host problem or a FreeBSD
client problem.


On 09/10/2010 02:54 PM, Bryan Bahnmiller wrote:

-- 
Nate Sanders            Digital Motorworks
System Administrator      (512) 692 - 1038





------------------------------

_______________________________________________
Veritas-bu maillist  -  Veritas-bu AT mailman.eng.auburn DOT edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


End of Veritas-bu Digest, Vol 53, Issue 6
*****************************************

_______________________________________________
Veritas-bu maillist  -  Veritas-bu AT mailman.eng.auburn DOT edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
