John,
We use the adv_disk option; there are a few things to bear in mind (not
really gotchas, just things to be aware of).
When creating the device, give it a size (this will make future staging
much easier to control, as NetWorker will be able to calculate the % used
when comparing against the high/low water marks).
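To make Bob's sizing point concrete, here is a rough sketch (not NetWorker's actual code, and the helper names are made up for illustration) of why a declared capacity matters: without it there is no % used figure, and therefore nothing to compare against the water marks.

```python
# Illustrative sketch only -- not NetWorker code. Shows why an adv_file
# device needs a declared size: "% used" (and hence the high/low water
# mark comparison that drives staging) can only be computed against a
# known capacity.

def percent_used(bytes_used, capacity_bytes):
    """Return the integer percentage of the device that is full."""
    return round(100 * bytes_used / capacity_bytes)

def staging_action(bytes_used, capacity_bytes, high=90, low=70):
    """Decide what a water-mark-driven staging policy would do."""
    pct = percent_used(bytes_used, capacity_bytes)
    if pct >= high:
        return "start staging"   # move oldest savesets until usage <= low
    if pct <= low:
        return "stop staging"
    return "no change"

# A 2000 GB device with 1900 GB written is over a 90% high water mark:
print(staging_action(1900, 2000))  # -> start staging
```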
The blurb about being able to read whilst writing is correct, but it
fails to highlight that only one read can be in progress at any one time
(you can run only one clone OR one stage OR one restore from the device,
whether it is writing or not).
I've been playing with this for several months now and I'm still not sure
what the optimum size for such a device should be. On the one hand, I can
increase my parallelism (with cloning and staging) by having many smaller
disks, but with other SAN devices being shared I could run up against the
maximum number of devices (I hope to add another 10TB to the system soon,
so I'll have a play then).
That's about it; basically it works, and works well. The speed of
restores makes the extra data shuffling worthwhile (at the moment the data
is handled three times: from client to disk, cloned to tape for offsite,
and staged to tape for longer-term storage).
Regards
Bob
-----Original Message-----
From: Legato NetWorker discussion [mailto:NETWORKER AT LISTMAIL.TEMPLE DOT EDU]
On Behalf Of Ballinger, John M
Sent: 17 December 2004 18:33
To: NETWORKER AT LISTMAIL.TEMPLE DOT EDU
Subject: [Networker] adv_disk in 7.0+ - do's & don'ts
Is anyone using the new adv_disk option in v7.0+?
We are, in a small way, and for the most part we like it.
Are there any do's and don'ts?
When an adv_disk device runs out of space and NetWorker issues the following
alert:
NetWorker adv_file: (alert) Waiting for more available space on
filesystem `/files3/_AF_readonly'
what are your options? Do you have to run an emergency staging operation,
moving some old savesets from adv_disk to tape, or will the system
automatically do this right away? We have one storage node with the
filesystems below:
Filesystem          kbytes     used       avail     capacity  Mounted on
/dev/dsk/c7t1d0s2   1469862496 1452013040 17849456  99%       /files1
/dev/dsk/c8t2d0s2   1715189024 1507888704 207300320 88%       /files2
/dev/dsk/c5t3d0s2   1469862496 1329754792 125409080 92%       /files3
/dev/dsk/c6t4d0s2   1715189024 1687194968 27994056  99%       /files4
/dev/dsk/c4t0d0s2   1393771232 1234703336 145130184 90%       /files5
/dev/dsk/c4t0d1s2   1194656408 948107136  234602712 81%       /files6
And NetWorker showing:
Device Used %Used
..files1 2029GB 100%
..files2 1829GB 100%
..files3 1711GB 100%
..files4 2002GB 100%
..files5 1197GB 86%
..files6 936GB 0%
(For /files6, is my only option clearing the volume and then labelling it
again?)
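Putting the two views side by side makes the /files6 oddity easy to spot. The sketch below (purely illustrative, not a NetWorker tool) compares the capacity percentages from `df -k` with the %Used figures NetWorker reports and flags volumes where the two disagree badly:

```python
# Illustrative sketch only: compare the capacity % that `df -k` reports
# with the %Used NetWorker reports for the same volumes, and flag big
# mismatches. /files6 is the odd one out: 81% full at the OS level but
# 0% used according to NetWorker.

df_pct = {"/files1": 99, "/files2": 88, "/files3": 92,
          "/files4": 99, "/files5": 90, "/files6": 81}
nw_pct = {"/files1": 100, "/files2": 100, "/files3": 100,
          "/files4": 100, "/files5": 86, "/files6": 0}

# Small differences are expected (reserved blocks, rounding); a large
# one suggests the volume and the media database are out of step.
mismatched = [fs for fs in sorted(df_pct)
              if abs(df_pct[fs] - nw_pct[fs]) > 15]
print(mismatched)  # -> ['/files6']
```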
and we have a staging policy set up with:
High water mark 90%
Low water mark 70%
Saveset selection oldest saveset
Max storage period 100 Days
Recover space interval 8 Hours
Filesystem check interval 3 Hours
That policy is applied to all 6 adv_disk devices/volumes.
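Applying those water marks to the %Used figures above shows what a recover-space pass would try to do. This is a rough sketch of the policy arithmetic, not NetWorker's actual staging algorithm; the 90/70 figures come from the policy listed above:

```python
# Illustrative sketch only: apply the staging policy's water marks to the
# %Used figures NetWorker reports and see which volumes a recover-space
# pass would try to stage, and roughly how much it would have to move.

HIGH_WATER = 90   # staging starts when a volume exceeds this
LOW_WATER = 70    # oldest savesets are staged until usage drops to this

nw_used_pct = {"files1": 100, "files2": 100, "files3": 100,
               "files4": 100, "files5": 86, "files6": 0}

needs_staging = {vol: pct - LOW_WATER          # % of capacity to move
                 for vol, pct in nw_used_pct.items()
                 if pct > HIGH_WATER}
for vol, to_move in sorted(needs_staging.items()):
    print(f"{vol}: above high water mark, stage ~{to_move}% of capacity")
```

With four volumes pinned at 100%, each would need roughly 30% of its capacity staged off to reach the 70% low water mark, which is why an out-of-space device stays stuck until staging (automatic or manual via nsrstage) frees room.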
thanks - John
-----Original Message-----
From: Legato NetWorker discussion [mailto:NETWORKER AT LISTMAIL.TEMPLE DOT
EDU]On Behalf Of Terry Lemons
Sent: Tuesday, December 07, 2004 11:11 AM
To: NETWORKER AT LISTMAIL.TEMPLE DOT EDU
Subject: Re: [Networker] SnapImage for Solaris - any updates in sight?
Hi Oscar
Check out the "LEGATO NetWorker, Release 7.2, Release Supplement, UNIX Version"
document on http://www.legato.com/resources/manuals. The bottom of p. 18 talks
about a new V7.2 feature called "data server agent" (DSA). Basically, I believe
that DSA is SnapImage embedded into the standard NetWorker distribution. I'm
not sure if it is supported on every NetWorker platform, but I don't read any
restrictions in the UNIX version, so I'll ASSume that all NetWorker UNIX
versions have this feature.
So, I believe this is the SnapImage 'update' that you were asking about!
Hope this helps.
tl
Terry Lemons
CLARiiON Applications Integration Engineering
EMC²
where information lives
4400 Computer Drive, MS D239
Westboro MA 01580
Phone: 508 898 7312
Email: Lemons_Terry AT emc DOT com
-----Original Message-----
From: Legato NetWorker discussion [mailto:NETWORKER AT LISTMAIL.TEMPLE DOT EDU]
On Behalf Of Oscar Olsson
Sent: Tuesday, December 07, 2004 12:47 PM
To: NETWORKER AT LISTMAIL.TEMPLE DOT EDU
Subject: [Networker] SnapImage for Solaris - any updates in sight?
It has been a while since SnapImage 1.6 was released for Solaris. It has
its bugs (for instance, problems with DAR recovers, especially large ones),
and it lacks NDMP version 4 support as well as some other new features that
have been introduced in, for instance, ONTAP.
So, will there be any update? It feels like there's much to be wished for
in the next release. Any rumors/gossip, at least? :)
//Oscar
--
Note: To sign off this list, send a "signoff networker" command via email to
listserv AT listmail.temple DOT edu or visit the list's Web site at
http://listmail.temple.edu/archives/networker.html where you can also view and
post messages to the list. Questions regarding this list should be sent to stan
AT temple DOT edu
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=