Hi All,
Got a question here about savegrp parallelism and target sessions.
NW: 6.1.3
OS: Solaris 8
HW: E-450
I think of 'Target Sessions' as a desired minimum, i.e. I want AT LEAST X
sessions writing to this drive before I'll request another drive.
I think of 'Savegrp Parallelism' as an absolute max, i.e. I want no more than X
sessions running under this group.
I think of 'Client Parallelism' as an absolute max, i.e. I want no more than X
sessions originating from this client.
I think of 'Server Parallelism' as an absolute max, i.e. I cannot accept more
than X sessions total for the entire data zone.
Are the above statements correct?
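FWIW, this is roughly how I've been reading those four values off the server
with nsradmin (resource and attribute names from memory, so double-check
against your own install):

   # server parallelism lives on the NSR server resource
   nsradmin> show name; parallelism
   nsradmin> print type: NSR

   # target sessions live on each NSR device resource
   nsradmin> show name; target sessions
   nsradmin> print type: NSR device

   # group (savegrp) and client parallelism
   nsradmin> print type: NSR group; name: Production
   nsradmin> print type: NSR client; name: tbox-app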
If I back up one client (savegrp -v -c tbox-app Production) within a group
consisting of 45 clients, using the following settings and hardware:
9840 Drives: 3 on the server; 2 on a SAN storage node
9840 Target Sessions: 4
Loaded/Mounted tapes in 9840 Drives: 0
Server Parallelism: 32
Savegrp Parallelism: 4
Client Parallelism: 4
Client Save Set: All
Client Filesystems: /, /nicapp, /boot, /dev/pts
BTW, 'tbox-app' is just a regular client - not a storage node of any type.
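Given that 'All' resolves to the 4 filesystems above, my expectation for this
one-client run was:

   streams started = min(4 savesets, client par. 4, savegrp par. 4,
                         server par. 32)                          = 4
   drives needed   = ceil(4 streams / 4 target sessions)          = 1 drive, 1 tape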
This is how NW responded:
Media Alert: Waiting for 4 tapes for pool 'Full'
** This is odd; it should know that only the 3 server-attached drives can
possibly be used, yet it's requesting 4 tapes.
It subsequently loaded all three 9840's with tapes and wrote ONE savestream to
each drive.
** I expected NW to load ONE tape and write all four savestreams to that
single drive. What the heck is going on here? Why isn't 'target sessions'
being honored?
There is no difference from the above if I use 'savegrp -N 4 -v -c tbox-app
Production' (added '-N 4'). If I use '-N 2', two drives are loaded and
written to, with ONE savestream to each drive.
Now, if any of the drives already has a volume loaded/mounted (whether it's
one drive or all 3) and I issue 'savegrp -v -c tbox-app Production', then all
4 streams go to ONE drive. This is the expected behavior.
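For now my workaround is to pre-mount a volume myself before kicking off the
group, something like the following (the device path and volume name here are
just examples from my setup):

   # load and mount a Full-pool volume on one server drive first
   nsrjb -l -f /dev/rmt/0cbn Full.001
   # then run the group; all 4 streams land on that one drive
   savegrp -v -c tbox-app Production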
Is this a known issue, and is there a solution?
Thanks,
/\/elson
--
~~ ** ~~ If you didn't learn anything when you broke it the 1st ~~ ** ~~
time, then break it again.