Re: [Networker] Disk Staging on storage node with adv_file option

Subject: Re: [Networker] Disk Staging on storage node with adv_file option
From: "Wood, R A (Bob)" <WoodR AT CHEVRON DOT COM>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Tue, 24 Oct 2006 13:48:30 +0100
First of all, I'd like to reassure you that it is perfectly possible to
do what you want with disk devices on storage nodes (we do it ourselves
without issues).

All you have to do is think it through. I find a diagram with lines
showing the data flow helps quite a lot.

We use pools to separate the data, one set for each storage node:

  storage_node_a_disk     the pool for storage node A's disk devices
  storage_node_a_default  real tapes (for when we stage off disk after
                          a week or so)
  storage_node_a_clone    for making offsite clones

Set up the disk devices on the storage node, then use pool selection
criteria to restrict the disk pool to those devices and to data from
that storage node. The disk backup license is tiered by total data
size, not by the number of individual disk devices, so you can have
more than one on each storage node.
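
As a very rough sketch, creating the device and its pool from nsradmin
might look like the following (the storage node hostname, device path
and pool attributes here are examples only; in practice you would also
set a label template and the group/client selection criteria on the
pool):

  nsradmin> create type: NSR device; name: "rd=stnode-a:/backup/dsk01";
            media type: adv_file
  nsradmin> create type: NSR pool; name: storage_node_a_disk;
            pool type: Backup; devices: "rd=stnode-a:/backup/dsk01"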

The other two pools can be set to clone-type pools (as we don't want
to back up directly from a client to those tapes).

As you've already had the backups going to the storage nodes, you
should not need to change storage node affinity.
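
If you want to double-check, the client resource's "storage nodes"
attribute is what controls the affinity, e.g. from nsradmin (the
client name here is just an example):

  nsradmin> show storage nodes
  nsradmin> print type: NSR client; name: client01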

So the data flow sequence is:

1. Storage node A backs up to the disk device on storage node A.
2. The data is then cloned from storage node A's disk device to
   storage node A's clone pool. These are the offsite tapes. At this
   stage there is still a copy on disk for restores.
3. At the end of the week (or however long you wish to keep data on
   disk for restore), the data is staged from storage node A's disk
   device to storage node A's default pool. These tapes are retained
   for the full retention period.
4. Offsite tapes are cycled back into the library.
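
As an illustration, the clone step can be driven with mminfo and
nsrclone along these lines (pool names as above; the one-day window
and the grep filter that strips the mminfo header line are my own
choices, so treat this as a sketch rather than a drop-in script):

  #!/bin/sh
  # Clone everything that landed in the disk pool since yesterday
  # to the offsite clone pool.
  SSIDS=`mminfo -q "pool=storage_node_a_disk,savetime>=yesterday" -r ssid \
         | grep '^[0-9]' | sort -u`
  if [ -n "$SSIDS" ]; then
      nsrclone -b storage_node_a_clone -S $SSIDS
  fi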

Of course, doing things this way means you are handling the data
three times, but it satisfies the need to have data offsite while
keeping it available for restores.

Cloning and staging work better when scripted rather than leaving
NetWorker in charge (careful choice of start times for cloning and
staging can reduce device contention, or even eliminate it
altogether).
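
For example, the weekly staging run could be a cron job along these
lines (again only a sketch, with example names and ages):

  #!/bin/sh
  # Stage everything older than a week off the disk device to the
  # long-retention tape pool. nsrstage -m migrates the save sets,
  # i.e. it removes the disk copy once the data is safely on tape.
  SSIDS=`mminfo -q "pool=storage_node_a_disk,savetime<last week" -r ssid \
         | grep '^[0-9]' | sort -u`
  if [ -n "$SSIDS" ]; then
      nsrstage -b storage_node_a_default -m -S $SSIDS
  fi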


Regards
Bob

>-----Original Message-----
>From: EMC NetWorker discussion 
>[mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU] On Behalf Of Raghava Karu
>Sent: 23 October 2006 17:32
>To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
>Subject: [Networker] Disk Staging on storage node with adv_file option
>
>Hi All,
>I need to configure disk staging in our backup environment, 
>which is a LAN-free environment with the DDS option: one backup 
>server, 8 storage nodes, and 6 tape drives shared between the 
>backup server and the 8 storage nodes. We are seeing stuck tape 
>issues and clones are taking too much time, so we decided to use 
>the disk staging option. But we are not sure whether we can attach 
>disk staging directly to a storage node instead of the backup 
>server:
>
>  Disk (s node) -> Disk (s node) -> tape (s node)
>                                 -> clone (b server)
>
>For staging the data from disk staging to tape (the resident 
>pool), do we need to specify anything anywhere? I mean, does it 
>automatically put one copy onto the clone pool and another onto 
>the resident pool (moved with the staging option)? For restores, 
>do we need to specify that the resident pool be used?
>What are the main differences between the file and adv_file 
>device types? 
>Thanks in advance,
>Raghava

To sign off this list, send email to listserv AT listserv.temple DOT edu and 
type "signoff networker" in the
body of the email. Please write to networker-request AT listserv.temple DOT edu 
if you have any problems
with this list. You can access the archives at 
http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER
