Subject: Re: [Networker] Exchange Cluster Best Practice
From: Scott Bingham <sfbing AT EARTHLINK DOT NET>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Sat, 17 Dec 2005 17:43:00 -0800
Hello Patrick,

The easiest approach, if you have not already done so, is to split your
Exchange databases into Storage Groups.  Each Exchange Server can have up to
four Storage Groups, and the Storage Groups can be backed up in parallel.
(Individual databases can also be backed up in parallel, but that is not
best practice for efficiency reasons.)
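
As a minimal sketch of what that can look like in nsradmin (the
"MSEXCH:" save set syntax, the storage group names SG1-SG4, and the
client name exch-virtual are assumptions here, so check the NetWorker
Module for Microsoft Exchange guide for your version), you would give
the cluster's virtual Exchange client one save set per Storage Group
and a matching parallelism:

  nsradmin> . type: NSR client; name: exch-virtual
  nsradmin> update save set: "MSEXCH:SG1", "MSEXCH:SG2", "MSEXCH:SG3", "MSEXCH:SG4"; parallelism: 4

With one save set per group and a client parallelism of 4, the four
groups stream concurrently instead of serially.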

A more complex approach would be to move towards snapshot backups.
Snapshots can provide near-instantaneous backups, but they involve
specialized storage hardware.

Thanks,
_Scott

-----Original Message-----
From: Legato NetWorker discussion [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU]
On Behalf Of Howard, Patrick
Sent: Friday, December 16, 2005 12:54 PM
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Subject: [Networker] Exchange Cluster Best Practice

Greetings All,

I have an Exchange Cluster that I am backing up in full every day (2),
and the backups are quite large, I think close to 600 GB.  Needless to
say, we are running over 24 hours to get a good backup in.  Most of the
time we have to kill the backups due to Exchange performance issues.

Our backup rate is good, but simple mathematics will show that, with
the amount of data we are backing up, it's going to take a long time.

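(For a rough sense of the arithmetic, assuming an effective rate of
7 MB/s for a single stream: 600 GB is roughly 614,000 MB, or about
88,000 seconds, which is right around 24 hours.  Four parallel streams
of ~150 GB each at the same per-stream rate would each finish in
roughly 6 hours.)
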
Is there a way to "split" up our Exchange stores?  I KNOW we aren't
the only ones out there with large Exchange stores to back up, so our
situation isn't unique.  Any suggestions on how to "speed" up the
process?

Patrick Howard
Ann Taylor
476 Wheelers Farm Road
Milford, CT 06460
(203) 283-8772 direct
(646) 660-0426 mobile
(203) 878-3836 fax
Patrick_Howard AT anntaylor DOT com
www.anntaylor.com

*******************************************************************************
The information in this email (including any attachments) is confidential
and may be legally privileged.  Access to this e-mail by anyone other than
the intended addressee is unauthorized.  If you are not the intended
recipient of this message, any review, disclosure, copying, distribution,
retention, or any action taken or omitted to be taken in reliance on it
(including any attachments) is prohibited and may be unlawful.  If you are
not the intended recipient, please reply to or forward a copy of this
message to the sender and delete the message, all attachments, and any
copies thereof from your system and destroy any printout thereof.

To sign off this list, send email to listserv AT listserv.temple DOT edu
and type "signoff networker" in the body of the email.  Please write to
networker-request AT listserv.temple DOT edu if you have any problems
with this list.  You can access the archives at
http://listserv.temple.edu/archives/networker.html or via RSS at
http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER
