[Veritas-bu] Netbackup Disk Tuning

From: simon.weaver AT astrium.eads DOT net (WEAVER, Simon)
Date: Tue, 14 Mar 2006 13:31:34 -0000

Nothing wrong with that setup. All our SAN / Master Servers use SSO, share
all 4 drives, and can use all 4 drives for backups.

The majority of our nightly backups start at 7pm and finish by 6am, so all 4
drives are available for restores during the day if required.

It is extremely rare for any jobs to overrun or to run during the day
(although if they do, then yes, having a spare drive available for restores,
or even for imports, is a requirement!).

Max Multiplex is set to 26 - we do a lot of streaming and get very good
performance. One particular job that used to take 3 hours to back up is now
down to 1hr 10 mins (after a lot of testing and tuning).

Most jobs are set for a multiplex of 7 - 12.
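
As a rough rule of thumb, the effective multiplex per drive is the lower of
the storage unit ceiling and the per-policy value, and total concurrent
streams are just drives times that. A quick back-of-envelope sketch (plain
Python, using the figures above):

    # Back-of-envelope stream capacity for the setup described above.
    drives = 4          # SSO-shared tape drives
    stu_mpx = 26        # Max Multiplex on the storage unit
    policy_mpx = 12     # top of the 7-12 range used per policy

    per_drive = min(stu_mpx, policy_mpx)   # the policy value is the binding limit here
    print(f"up to {drives * per_drive} concurrent client streams")   # -> 48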

Again, each environment is different

Simon Weaver 
Technical Support 
Windows Domain Administrator 

EADS Astrium 
Tel: 02392-708598 

Email: Simon.Weaver AT Astrium.eads DOT net 



-----Original Message-----
From: Spearman, David [mailto:spe08 AT co.henrico.va DOT us] 
Sent: 14 March 2006 13:25
To: WEAVER, Simon; Clooney
Cc: veritas-bu AT mailman.eng.auburn DOT edu
Subject: RE: [Veritas-bu] Netbackup Disk Tuning


Simon,

I can only tell you what we do:

A W2K master/media server plus a W2K3 media server (5.1 MP4), everything
attached to an ADIC Scalar 2000 with 10 drives, SSO-shared between the
master/media and media servers.

The master server is set to use 3 Max Concurrent Drives, which means our
master server will never use more than 3 drives even though it "sees" all
10.

The media server is set to use 6 Max Concurrent Drives, which means it will
never use more than 6 drives even though it "sees" all 10.

You will notice we always leave one drive available for restores; that may
not be necessary in your case. The way these limits are set also ensures the
load is split properly during peak periods. (This gave us about a 15% gain
in throughput.)

Our maximum multiplexing is set at 3, which means each drive will multiplex
no more than 3 jobs per tape unit (fewer, of course, if that is all that is
in the queue). Through a lot of testing we have found that our environment
works best with a maximum multiplex of 3. However, when I set up my policies
I set the maximum multiplexing to 32. The limiting factor is at the storage
unit level (3), but if for some reason we need to open up (or choke down)
the multiplexing, the policies will not need to be edited.
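
Put another way, the multiplex that actually applies per drive is simply the
minimum of the storage unit setting and the policy setting, which is why the
32 never bites. A minimal sketch of both limits (plain Python; the figures
are the ones from this message):

    # Effective multiplex per drive: the lower of the storage unit and
    # policy settings, so a policy value of 32 is harmless when the
    # storage unit is capped at 3.
    stu_mpx, policy_mpx = 3, 32
    effective_mpx = min(stu_mpx, policy_mpx)
    print(f"effective multiplex per drive: {effective_mpx}")   # -> 3

    # Drive allocation in the same setup: 10 library drives, master
    # capped at 3 concurrent, media server at 6, which always leaves
    # 1 drive free for restores.
    total_drives, master_cap, media_cap = 10, 3, 6
    spare = total_drives - (master_cap + media_cap)
    print(f"drives left for restores: {spare}")   # -> 1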

David Spearman
County of Henrico, Va.


-----Original Message-----
From: veritas-bu-admin AT mailman.eng.auburn DOT edu
[mailto:veritas-bu-admin AT mailman.eng.auburn DOT edu] On Behalf Of WEAVER, 
Simon
Sent: Tuesday, March 14, 2006 8:08 AM
To: 'Clooney'
Cc: veritas-bu AT mailman.eng.auburn DOT edu
Subject: RE: [Veritas-bu] Netbackup Disk Tuning



Hi
In the properties for the master's storage unit, is Max Concurrent Drives
Used For Backups correctly set to 4?

Although 4 drives are available, is it actually set for all 4 to be used in
the properties?

Also, what is Maximum Multiplexing per Drive set at?

Thanks

Simon Weaver 
Technical Support 
Windows Domain Administrator 

EADS Astrium 
Tel: 02392-708598 

Email: Simon.Weaver AT Astrium.eads DOT net 



-----Original Message-----
From: Clooney [mailto:d_clooney AT yahoo DOT com] 
Sent: 14 March 2006 11:28
To: WEAVER, Simon; veritas-bu AT mailman.eng.auburn DOT edu
Subject: RE: [Veritas-bu] Netbackup Disk Tuning


Simon

Not extensively, in my opinion. Currently running:

Master server >
2 jobs active to tape,
2 queued (spitting out 134s) until either active job completes.

Four drives are available on the master server but only two are used; max
jobs is set correctly on the storage unit, in global attributes and on the
client.

Drives aren't down and tapes aren't stuck either; alternate drives are used,
but only 2 at a time, and queued jobs give 134s until one of the two active
jobs completes.

Can't figure out the 134s, or why only 2 of the 4 drives are being used.
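
The way I understand it, the number of drives actually used should be the
smallest of the various job limits, which all check out here. A rough sketch
of that reasoning (plain Python; the names are just labels for the settings
listed above, not real NetBackup parameters, and the values are illustrative
rather than taken from my config):

    # Which limit is binding? Names and values are illustrative only.
    limits = {
        "storage_unit_max_concurrent_drives": 4,
        "global_attributes_max_jobs": 4,
        "client_max_jobs": 4,
    }
    binding = min(limits, key=limits.get)
    print(f"expected drives in use: {limits[binding]} (bound by {binding})")

By that logic all 4 drives should be in use, which is why the 2-drive
behaviour makes no sense to me.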

Media server >
7 jobs to disk > running fine to a mount on the media server.
1 duplication > job running fine to directly attached tape.
1 duplication > mounts the tape and then hangs on reading > this one uses
the master server's storage unit, as mentioned above.

Cannot fathom why only two drives are being used, nor the 134s.

Any suggestions would be much appreciated.

Regards

David

--- "WEAVER, Simon" <simon.weaver AT astrium.eads DOT net> wrote:

> 
> David
> Do all the backups run at the same time (i.e. 7:00pm)? Maybe they can 
> be staggered? We used to run into 134s a long, long time ago, but since 
> our backups were staggered through the night, they are long gone!
> 
> Simon Weaver
> Technical Support
> Windows Domain Administrator
> 
> EADS Astrium
> Tel: 02392-708598
> 
> Email: Simon.Weaver AT Astrium.eads DOT net
> 
> 
> 
> -----Original Message-----
> From: Clooney [mailto:d_clooney AT yahoo DOT com]
> Sent: 14 March 2006 10:43
> To: veritas-bu AT mailman.eng.auburn DOT edu
> Subject: [Veritas-bu] Netbackup Disk Tuning
> 
> 
> Hi All
> 
> Scenario:
> 
> Master server > HP-UX gbaheu17 B.11.00 U 9000/800
> Media server  > Linux gbahel25.gb.tntpost.com 2.4.21-20.ELsmp
> Tape library  > StorageTek using SN6000
> 
> A large portion of the backups are written to disk mounted on the media 
> server and then disk-staged to tape at a later time. Along with this 
> there are backups that are written directly to tape.
> 
> I am getting the feeling the master and media server are just too busy, 
> as there is an extremely high number of 219s and a considerable number 
> of 134s (indicating the server is short on resources).
> 
> Does anyone have a fine-tuning doc for the above master and media 
> servers to point me in the right direction on overcoming this 
> resource-hungry environment?
> 
> Much appreciated
> 
> David Clooney
> 

