Good point. Could you explain more about the clone process (the shed)?
Thanks
Ben
----- Original Message -----
From: Stuart Whitby <swhitby AT DATAPROTECTORS.CO DOT UK>
Date: Friday, September 22, 2006 4:49 am
Subject: Re: [Networker] Backup Pools
> The problem you'd have by going to multiple pools is that your
> single drives would be stuck with one pool anyway, and any
> overnight backups to the jukebox would have to wait for other
> clients to finish before the tape can be swapped for the one
> belonging to their own pool. If you get a hung saveset on the
> first jukebox backup, you don't get any further jukebox backups
> that night, as the tape will not be ejected and you have no spare
> capacity to load tapes for other pools.
>
> My personal preference is to keep pools as simple as possible and
> base this on retention policy. One pool for each retention period
> (and keep as few as reasonably possible) and one for indexes and
> bootstrap.
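>
> If the real worry is knowing which tape holds which server, you
> can usually answer that with mminfo rather than with extra pools.
> A rough sketch ("serverA" is just a placeholder client name; check
> mminfo(8) on your version for the exact query and report syntax):
>
>   mminfo -avot -q "client=serverA" -r "volume,name,savetime,ssid,totalsize"
>
> That lists every volume holding serverA's savesets, so you know up
> front which tapes a recovery will ask for.
>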
> If you're specifically looking at improving recovery times, I'd
> recommend getting a similar but separate jukebox to clone to,
> preferably one which is based offsite (put it in a shed with power
> & a fibre connection out the back of the office). If you're
> looking at using all your drives to recover 3 systems at once,
> it's because you've had a site-based failure, and the benefit of
> cloning in this way is that you have easily accessible offsite (or
> "out of office" with my shed suggestion) backups which are also
> based on saveset rather than tape. I.e., you have one continuous
> saveset on tape which can just be streamed back to the server.
> You *can* do this to different tapes on a server-by-server basis,
> but that's scripted wizardry rather than a default option.
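>
> For the scripted route, the usual pattern is mminfo to pick out
> each client's recent saveset IDs, then nsrclone to a clone pool,
> changing tapes between clients if you want them kept separate. A
> minimal sketch (assumes a clone pool called "Offsite" already
> exists; the date syntax follows nsr_getdate(3), so check it on
> your version):
>
>   for c in serverA serverB serverC; do
>     ssids=`mminfo -q "client=$c,savetime>=yesterday" -r ssid`
>     nsrclone -b Offsite -S $ssids
>   done
>
> nsrclone writes each saveset out contiguously, which is what gives
> you the streamed-back-to-the-server recovery I mentioned above.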
>
> In this case, I'd rather make sure you get the backups and know
> that recoveries will happen eventually than run the risk of
> missing backups in exchange for great recovery times when you do
> get them.
>
> Cheers,
>
> Stuart.
>
> ________________________________
>
> From: EMC NetWorker discussion on behalf of Ben Harner
> Sent: Thu 21-Sep-06 19:46
> To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
> Subject: Re: [Networker] Backup Pools
>
>
>
> Well, first we have an IBM LTO 3581 jukebox and an IBM LTO 3580
> single drive. We have 1000 Mbps network throughput, but the backup
> server only has a 100 Mbps card. There are 17 clients. On full
> backups, 5 clients are at least 100-200 GB per client, and the
> others are between 30 and 60 GB. 5 groups are single-client and
> hold the larger clients, 1 is a two-client group of median size,
> and 1 group has 9 clients with smaller save sets. 15 clients are
> using the jukebox and 2 are using the single drive. We have
> another single drive, but that is used by a different department;
> it would be accessible to us in a disaster recovery operation if
> need be, which is why I was looking into breaking up our most
> important servers into different pools so as to aid in a quicker
> recovery. If more than one server were to crash, we could recover
> 3 servers simultaneously if we know for sure that the servers are
> on different tapes.
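>
> (Rough numbers, assuming the server's 100 Mbps card is the choke
> point: 100 Mbps is about 12 MB/s, call it roughly 40 GB/hour at
> best, so a full of all 17 clients, somewhere around 1.3 TB, is
> more than a day of streaming before tape changes even enter into
> it.)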
>
> Thanks for the help
>
> Ben
>
> Stan Horwitz wrote:
> > On Sep 21, 2006, at 1:50 PM, Ben Harner wrote:
> >
> >> Thanks for the reply. So if I have a jukebox with 7 slots and
> >> say I have 4 different pools with 1 to 2 tapes for each pool,
> >> you think there would be long waits between backups? Shouldn't
> >> it only be the wait of the jukebox changing the tape? Right now
> >> all sessions have to wait for the previous one to finish per
> >> tape drive, so wouldn't that be the same either way? Right now
> >> I have 15 clients being backed up nightly to one tape drive
> >> using one pool. I am trying to get it so I can specify certain
> >> tapes for certain clients. I'm guessing the only way to do that
> >> is through pools. Are there other ways?
> >
> > Possibly. It's difficult to say without knowing how much data
> > would go to the individual devices and how fast the throughput
> > is. Answering your question would be easier if you explained
> > what your hardware and network environment is like and the
> > amounts of data involved.
> >
>
To sign off this list, send email to listserv AT listserv.temple DOT edu and
type "signoff networker" in the body of the email. Please write to
networker-request AT listserv.temple DOT edu if you have any problems
with this list. You can access the archives at
http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER