
[Networker] RE: [Networker] Staging to Advanced File Type devices

2005-08-17 15:26:51
Subject: [Networker] RE: [Networker] Staging to Advanced File Type devices
From: Mark Davis <davism AT UWO DOT CA>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Wed, 17 Aug 2005 15:16:20 -0400
Thanks for the replies on this question. It sounds like multiple pools is the only way to manage multiple Adv_File devices effectively in a disk staging environment.

Regards,

Mark

Faidherbe, Thierry wrote:
If you want to work with multiple Adv_File devices at a time, and the
Adv_File devices are used as a temporary location before being staged
to another (tape-based) pool, you have to label them into different
pools, because a backup to a disk device will not span across
different volumes. Then load-balance your clients across these new pools.
Since staging is just moving data from one pool to another, you can then
stage from multiple file devices at a time to one or more tape devices
in one or more "tape" pools.
The major advantages you get are:
-more than one staging session at a time (faster)
-you can run a recover by stopping one stage without
blocking the whole staging process
-better disk usage and performance
-less impact/damage from file system corruption for save sets (ssids) not yet staged
HTH, Thierry
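
[Editor's note: for anyone who wants to script the pool-per-device scheme Thierry
describes, the sketch below drives it from the standard NetWorker command-line
tools (nsrmm to label, mminfo to query, nsrstage to stage). The pool names,
device paths, and volume names are invented for illustration, and exact flags
can differ between NetWorker releases, so treat this as a starting point, not
a recipe.]

#!/usr/bin/env python3
"""Sketch of the one-pool-per-AFTD staging scheme described above.

Assumptions (not from the original thread): pool names like DiskPool01,
device paths like /aftd/01, and a destination tape pool called TapeStage.
Check the nsrmm/nsrstage man pages on your own server before relying on this.
"""
import subprocess

# One pool per advanced file type device, as suggested in the thread.
AFTD_POOLS = {
    "DiskPool01": "/aftd/01",
    "DiskPool02": "/aftd/02",
    # ... one entry per 700 GB file system
}
TAPE_POOL = "TapeStage"   # destination pool on tape

def label_aftd(pool, device, volume):
    """Label a disk volume into its own pool (one-time setup per device)."""
    subprocess.run(["nsrmm", "-l", "-b", pool, "-f", device, volume], check=True)

def unstaged_ssids(pool):
    """Return the save set IDs currently sitting in one disk pool."""
    out = subprocess.run(
        ["mminfo", "-q", f"pool={pool}", "-r", "ssid"],
        capture_output=True, text=True, check=True)
    # mminfo prints a column header; keep only the numeric ssid tokens.
    return [tok for tok in out.stdout.split() if tok.isdigit()]

def stage_pool(pool):
    """Move everything in one disk pool to the tape pool.

    Because each AFTD has its own pool, several of these can run in
    parallel (one per device), which is the point of the scheme.
    """
    ssids = unstaged_ssids(pool)
    if ssids:
        subprocess.run(["nsrstage", "-b", TAPE_POOL, "-m", "-S"] + ssids, check=True)

if __name__ == "__main__":
    for pool in AFTD_POOLS:
        stage_pool(pool)   # in practice, run these concurrently, e.g. one per cron job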

________________________________

From: Legato NetWorker discussion Date: Wed 17/08/2005 12:26
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Subject: Re: [Networker] Staging to Advanced File Type devices



Mark Davis wrote:

We have 10 TB of "Advanced File Type" disk used to stage our data. The
disk is broken up into fourteen 700 GB file systems. The problem we are having
is that NetWorker does not distribute the data evenly across these file
systems. When selecting a device for backup, NetWorker always seems to
pick the disk that has the most data. If one disk is at 8% full, and
another is at 75% full, it will use the one with 75%.

Of course this makes sense when backing up to tape, but causes a variety
of problems when using disk. For example, it makes it very difficult to
maintain a uniform retention period for the data on disk.

Has anyone else run into this problem, and is there a way to even out the
data distribution?
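
[Editor's note: the thread does not give a fix for the device-selection behaviour,
but it can help to make the skew visible. A minimal sketch follows, assuming the
fourteen file systems are mounted under hypothetical paths /aftd/01 .. /aftd/14;
substitute the real mount points.]

#!/usr/bin/env python3
"""Report the fill level of each AFTD file system so the imbalance is visible."""
import shutil

# Hypothetical mount points for the fourteen 700 GB file systems.
AFTD_MOUNTS = [f"/aftd/{n:02d}" for n in range(1, 15)]

for mount in AFTD_MOUNTS:
    usage = shutil.disk_usage(mount)          # total / used / free, in bytes
    pct = 100 * usage.used / usage.total
    print(f"{mount}: {pct:5.1f}% full ({usage.used / 2**30:.0f} GiB used)")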


--
Note: To sign off this list, send a "signoff networker" command via email
to listserv AT listserv.temple DOT edu or visit the list's Web site at
http://listserv.temple.edu/archives/networker.html where you can
also view and post messages to the list. Questions regarding this list
should be sent to stan AT temple DOT edu
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
