Thanks, Ronny,
Here is what I have:
I have 5 adv_file devices (adv_lun1 to adv_lun5) and 2 tape drives.
Staging policy: 80% (high water mark) - 65% (low water mark).
When one adv_lun reaches 80%, staging to one tape drive starts; when a second
adv_lun reaches 80%, the second drive is used. When a third adv_lun reaches
80%, I get an alert saying "waiting for a tape or disk for Default Clone pool".
So, as writing to tape is slower than writing to disk, another adv_lun
reaches 80% before one of the two drives is freed.
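For reference, the watermarks live in the NSR stage resource and can be
checked with nsradmin. A sketch of what mine looks like (the resource name
"Stage to tape" is just an example, and the attribute list is trimmed; exact
layout may differ per NetWorker version):

    nsradmin> . type: NSR stage
    nsradmin> print
                        type: NSR stage;
                        name: Stage to tape;
                     enabled: Yes;
         high water mark (%): 80;
          low water mark (%): 65;
            destination pool: Default Clone;
                     devices: adv_lun1, adv_lun2, adv_lun3, adv_lun4, adv_lun5;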
-----Original Message-----
From: Ronny Egner [mailto:RonnyEgner AT gmx DOT de]
Sent: Wednesday, 28 April 2010 14:57
To: EMC NetWorker discussion; Hirter Marcel
Subject: Re: [Networker] staging parallelism
-------- Original Message --------
> Date: Wed, 28 Apr 2010 03:01:40 -0400
> From: Marcel Hirter <marcel.hirter AT NE DOT CH>
> To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
> Subject: [Networker] staging parallelism
> Hi,
> I have a problem with staging to tape:
> I am staging adv_file devices to tape, and I only get one session per tape
> drive. Is it possible to have two or three adv_file devices staging to one
> tape? If yes, how?
I did this with a trick:
Assume you have a large device /nsr/stage of 10 TB size.
Normally you configure one adv_file device (AFTD) pointing to /nsr/stage,
thus getting a maximum of one staging process at a time.
The trick is to create some directories within /nsr/stage:
/nsr/stage/disk1
/nsr/stage/disk2
/nsr/stage/disk3
Then create three adv_file devices (one for each of /nsr/stage/disk1,
/nsr/stage/disk2 and /nsr/stage/disk3), as sketched below.
From the NetWorker perspective you then have three devices (six counting the
read-only ones), allowing a total of three staging processes in parallel.
The disk space itself is shared, but in most scenarios this doesn't matter.
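A minimal sketch of that setup (the nsradmin attributes shown are the basic
ones for an adv_file device; exact attributes may differ per NetWorker
version, so treat this as an illustration, not a recipe):

    # create the subdirectories inside the one large file system
    mkdir /nsr/stage/disk1 /nsr/stage/disk2 /nsr/stage/disk3

    # define one adv_file device per directory
    nsradmin> create type: NSR device; name: /nsr/stage/disk1; media type: adv_file
    nsradmin> create type: NSR device; name: /nsr/stage/disk2; media type: adv_file
    nsradmin> create type: NSR device; name: /nsr/stage/disk3; media type: adv_file

Afterwards each device has to be labelled into the staging pool and added to
the devices list of the NSR stage resource as usual.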
Yours sincerely
Ronny Egner
--
Ronny Egner
RonnyEgner AT gmx DOT de