
Subject: Re: [Veritas-bu] DD LSU question
From: "Mark Glazerman" <Mark.Glazerman AT spartech DOT com>
To: <VERITAS-BU AT MAILMAN.ENG.AUBURN DOT EDU>
Date: Wed, 20 Apr 2011 13:43:19 -0500
According to the OST/Boost admin guide 
(http://www.emc.com/collateral/software/white-papers/h7296-data-domain-boost-openstorage-wp.pdf),
having multiple LSUs may impact some advanced NetBackup features, such as media 
server load balancing and capacity reporting.  We are seeing compression numbers 
across our multiple LSUs that are just as impressive as the ones we saw across our 
multiple directories under /backup before deploying OST.  I wouldn't be overly 
concerned unless your SE or support gives you a reason to be.
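
If you want to double-check what NetBackup itself reports for each LSU, a rough 
sketch along these lines may help (the storage server name is a placeholder, and 
the exact fields bpstsinfo prints vary by NetBackup release):

#!/usr/bin/env python
# Rough sketch only: list what NetBackup reports for each LSU on a Boost
# storage server. The hostname below is a placeholder, and the exact
# output fields from bpstsinfo differ between NetBackup releases.
import subprocess

BPSTSINFO = "/usr/openv/netbackup/bin/admincmd/bpstsinfo"
STORAGE_SERVER = "dd01.example.com"   # placeholder Data Domain hostname

def list_lsu_info(storage_server):
    """Return the raw bpstsinfo -lsuinfo output for one storage server."""
    cmd = [BPSTSINFO, "-lsuinfo",
           "-storage_server", storage_server,
           "-stype", "DataDomain"]
    return subprocess.check_output(cmd).decode("utf-8", "replace")

if __name__ == "__main__":
    for line in list_lsu_info(STORAGE_SERVER).splitlines():
        # crude filter; adjust to whatever your release actually prints
        if any(key in line for key in ("LSU", "Capacity", "Used", "Free")):
            print(line)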

Mark Glazerman
Desk: 314-889-8282
Cell: 618-520-3401
Please don't print this e-mail unless you really need to.


-----Original Message-----
From: veritas-bu-bounces AT mailman.eng.auburn DOT edu 
[mailto:veritas-bu-bounces AT mailman.eng.auburn DOT edu] On Behalf Of X_S
Sent: Wednesday, April 20, 2011 1:15 PM
To: VERITAS-BU AT MAILMAN.ENG.AUBURN DOT EDU
Subject: [Veritas-bu] DD LSU question

Thanks. We created multiple LSUs for exactly the same reason: to keep an eye on 
the ratios for the different OSes and databases, on the assumption that dedupe and 
compression are done across all of the data ingested, so the existence of multiple 
LSUs wouldn't matter.  I'm hoping that is the case.
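
For what it's worth, one way to check that assumption on the box itself is to 
compare the global compression factor with the per-MTree numbers behind each 
storage unit.  A rough sketch, assuming ssh access as sysadmin and a DD OS 
release that supports "mtree show compression" (the hostname and storage-unit 
names below are placeholders):

#!/usr/bin/env python
# Rough sketch only: compare the box-wide compression factor with the
# per-MTree numbers behind each Boost storage unit, over ssh. Hostname and
# MTree paths are placeholders; command availability depends on DD OS release.
import subprocess

DD_HOST = "sysadmin@dd01.example.com"        # placeholder
LSU_MTREES = [                               # placeholder storage units
    "/data/col1/lsu_windows",
    "/data/col1/lsu_unix",
    "/data/col1/lsu_oracle",
]

def dd(command):
    """Run one DD OS CLI command over ssh and return its output."""
    out = subprocess.check_output(["ssh", DD_HOST, command])
    return out.decode("utf-8", "replace")

if __name__ == "__main__":
    print("=== box-wide (dedupe happens across everything ingested) ===")
    print(dd("filesys show compression"))
    for mtree in LSU_MTREES:
        print("=== %s ===" % mtree)
        print(dd("mtree show compression %s" % mtree))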



_______________________________________________
Veritas-bu maillist  -  Veritas-bu AT mailman.eng.auburn DOT edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu