Subject: Re: [Networker] Backup Pools
From: Yaron Zabary <yaron AT ARISTO.TAU.AC DOT IL>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Mon, 25 Sep 2006 15:12:46 +0300
On Mon, 25 Sep 2006, Stuart Whitby wrote:

> Saveset chunks are written to the tape when a reasonably sized chunk
> is ready to go (I'm sure there's a better technical explanation for
> how this is handled, but that works). What this results in is a tape
> which has savesets interleaved like:
> 
> |   A    |  B  |      C        |   A   |               B              |A|  C  |
> 
> To recover that data, you'd need to read chunks, position the tape,
> read more chunks, etc. When you clone those savesets to another tape,
> what you end up with is more like:
> 
> |        A            |                    B                  |          C        |
> 
> because the drive has already done the recovery once and written the
> savesets in full to another tape. Now, not only have you confirmed
> that the backup is good by confirming that the original can be read,
> you've also helped speed up a recovery by giving the system a single
> contiguous chunk of data to read (subject to the write speed of the
> disks being anywhere close to the read speed of the tapes). By
> creating a copy, you've also got the possibility of recovering at
> least 2 servers simultaneously, going on your previous discussion
> about this. And if you can offsite the second jukebox, you've now got
> an offsite copy of the data in case of site disaster.  The only
> problem is forcing NetWorker into choosing the right volume (between
> original and clone) for the recovery.

  The better approach, IMHO, would be to use NetWorker's DiskBackup
option and clone from the disk copy (see the sketch after this list).
This has the following advantages:

  . As long as the original is on disk, you can run restores for many
clients (not only two) simultaneously, and faster.

  . Both the original and the clone are contiguous on tape.

  . The clone process is faster.
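
  As a rough sketch of that workflow (the pool and volume names here
are invented for illustration; check the mminfo and nsrclone man pages
for the exact options on your release):

    # List savesets still sitting on the disk device's volume.
    mminfo -q "volume=DiskBackup.001" -r "ssid,client,name,totalsize"

    # Clone one of them from the disk copy to a tape pool.
    nsrclone -b "Offsite Clone" -S 4025893282

  Because the source is a disk volume, there is no tape repositioning
on the read side, which is why the clone lands as one contiguous
saveset and runs faster.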


  Anyhow, when you recover a large volume, my experience shows that the
bottleneck is always the file-system traversal of the target filesystem
and not the backup system. This is usually the problem with backups as
well.

  When you clone a saveset, NetWorker will still use the original copy;
you must mark the original as suspect before it will read from the
clone (this is on 7.2.2). Also, I think there is a bug (or feature)
which causes both the original and the clone savesets to be marked as
suspect, so after you mark the original as suspect, you should mark the
clone as normal again (see the commands below). This happened to me a
couple of times in the past.
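
  For reference, a sketch with nsrmm (the ssid/cloneid values are
invented; pull the real ones from mminfo first):

    # Show every copy of a saveset with its clone ID and status flags.
    mminfo -q "ssid=4025893282" -r "ssid,cloneid,volume,sumflags"

    # Mark the original copy suspect so recover prefers the clone.
    nsrmm -o suspect -S 4025893282/1159164521

    # If the clone got flagged suspect as well, clear it.
    nsrmm -o notsuspect -S 4025893282/1159170033

  The ssid/cloneid pair names one specific copy of the saveset, so
marking only the original suspect leaves the clone as the valid
instance for the recovery.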

> 
> Otherwise, man nsrclone should tell you all you need.
> 
> Cheers,
> 
> Stuart.
> 
> ________________________________
> 
> From: EMC NetWorker discussion on behalf of Ben Harner
> Sent: Sun 24-Sep-06 22:21
> To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
> Subject: Re: [Networker] Backup Pools
> 
> 
> 
> Good point. Could you explain more about the clone process (the shed)?
> 
> Thanks
> 
> Ben
> 
> ----- Original Message -----
> From: Stuart Whitby <swhitby AT DATAPROTECTORS.CO DOT UK>
> Date: Friday, September 22, 2006 4:49 am
> Subject: Re: [Networker] Backup Pools
> 
> > The problem you'd have by going to multiple pools is that your
> > single drives would be stuck with one pool anyway, and any
> > overnight backups to the jukebox would have to wait for other
> > clients to finish in order to change the tape so that they can use
> > the one for their own pool. If you get a hung saveset on the first
> > jukebox backup, you don't get any further jukebox backups that
> > night, as the tape will not be ejected and you have no spare
> > capacity to load tapes for other pools.
> >
> > My personal preference is to keep pools as simple as possible and
> > base this on retention policy. One pool for each retention period
> > (and keep as few as reasonably possible) and one for indexes and
> > bootstrap.
> > If you're specifically looking at improving recovery times, I'd
> > recommend getting a similar but separate jukebox to clone to,
> > preferably one which is based offsite (put it in a shed with power
> > & a fibre connection out the back of the office). If you're
> > looking at using all your drives to recover 3 systems at once, it's
> > because you've had a site-based failure, and the benefit of cloning
> > in this way is that you have easily accessible offsite (or "out of
> > office", with my shed suggestion) backups which are also based on
> > saveset rather than tape. I.e., you have one continuous saveset on
> > tape which can just be streamed back to the server. You *can* do
> > this to different tapes on a server-by-server basis, but that's
> > scripted wizardry rather than a default option.
> >
> > In this case, I'd rather make sure you get the backups and know
> > that recoveries will happen eventually than run the risk of not
> > getting backups but having great recovery times when you do.
> >
> > Cheers,
> >
> > Stuart.
> >
> > ________________________________
> >
> > From: EMC NetWorker discussion on behalf of Ben Harner
> > Sent: Thu 21-Sep-06 19:46
> > To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
> > Subject: Re: [Networker] Backup Pools
> >
> >
> >
> > Well, first, we have an IBM LTO 3581 jukebox and an IBM LTO 3580
> > single drive. We have 1000 Mbps network throughput, but the backup
> > server only has a 100 Mbps card. There are 17 clients. At full
> > backups, 5 clients are at least 100-200 GB per client, and the
> > others are between 30 and 60 GB. 5 groups are single-client and
> > hold the larger clients, 1 has two clients of median size, and 1
> > group has 9 clients with smaller savesets. 15 clients are using
> > the jukebox and 2 are using the single drive. We have another
> > single drive that is used by a different department but would be
> > accessible to us in a disaster recovery operation if need be,
> > which is why I was looking into breaking our most important
> > servers up into different pools, so as to aid a quicker recovery.
> > If more than one server were to crash, we could recover 3 servers
> > simultaneously if we knew for sure that the servers are on
> > different tapes.
> >
> > Thanks for the help
> >
> > Ben
> >
> > Stan Horwitz wrote:
> > > On Sep 21, 2006, at 1:50 PM, Ben Harner wrote:
> > >
> > >> Thanks for the reply. So if I have a jukebox with 7 slots and say I
> > >> have 4 different pools with 1 to 2 tapes for each pool, do you think
> > >> there would be long waits between backups? Shouldn't it be only the
> > >> wait of the jukebox changing the tape? Right now all sessions have to
> > >> wait for the previous one to finish per tape drive, so wouldn't that
> > >> be the same either way? Right now I have 15 clients being backed up
> > >> nightly to one tape drive using one pool. I am trying to get it so I
> > >> can specify certain tapes for certain clients. I'm guessing the only
> > >> way to do that is through pools. Are there other ways?
> > >
> > > Possibly. It's difficult to say without knowing how much data would go
> > > to the individual devices and how fast the throughput is. Answering
> > > your question would be easier if you explained what your hardware and
> > > network environment is like and the amounts of data involved.
> 


-- Yaron.

To sign off this list, send email to listserv AT listserv.temple DOT edu
and type "signoff networker" in the body of the email. Please write to
networker-request AT listserv.temple DOT edu if you have any problems
with this list. You can access the archives at
http://listserv.temple.edu/archives/networker.html or via RSS at
http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER
