Re: [ADSM-L] TSM RFE regarding Litigation Hold

Subject: Re: [ADSM-L] TSM RFE regarding Litigation Hold
From: "Vandeventer, Harold [BS]" <Harold.Vandeventer AT KS DOT GOV>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Tue, 7 May 2013 19:35:35 +0000
Great ideas, Paul. I'm preparing to build the alternate-server-without-expiration
approach as soon as I can scare up some resources.

I'll look at the alternate Domain approach also.



-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of 
Paul Zarnowski
Sent: Tuesday, May 07, 2013 12:54 PM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Re: [ADSM-L] TSM RFE regarding Litigation Hold

We deal with a variety of types of litigation hold here as well.  What you can
do now, easily, is to set up a parallel policy domain (e.g., LITHOLD) that has
all the same management classes but different retention policies (e.g., retain
forever).  Then, to avoid expiration, you just have to do this:

UPDATE NODE nodename DOMAIN=LITHOLD
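
If you don't already have a LITHOLD domain, one way to build it is to copy the
existing domain and loosen retention in the copied copy groups before
activating.  A rough sketch, assuming a source domain, policy set, and
management class all named STANDARD (adjust the names to your environment,
repeat the UPDATE COPYGROUP commands for each management class you use, and
skip the archive line for classes with no archive copy group):

COPY DOMAIN STANDARD LITHOLD
UPDATE COPYGROUP LITHOLD STANDARD STANDARD STANDARD TYPE=BACKUP VEREXISTS=NOLIMIT VERDELETED=NOLIMIT RETEXTRA=NOLIMIT RETONLY=NOLIMIT
UPDATE COPYGROUP LITHOLD STANDARD STANDARD STANDARD TYPE=ARCHIVE RETVER=NOLIMIT
VALIDATE POLICYSET LITHOLD STANDARD
ACTIVATE POLICYSET LITHOLD STANDARD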

This works as long as LITHOLD defines all the same management classes as the
original domain.  You can move the node back and forth between domains as
needed.  If LITHOLD is missing a management class, retention for files bound to
that class falls back to the domain's "grace period" settings - something
you'll probably want to avoid.
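
A quick way to confirm the class lists match is to compare the active policy
sets in the two domains (the domain names here are just examples):

QUERY MGMTCLASS STANDARD ACTIVE *
QUERY MGMTCLASS LITHOLD ACTIVE *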

No changes needed on the client side since you're not changing management class 
names, just their attributes.

If you have associated a schedule with the node, you'll also need copies of the
schedules in LITHOLD and to re-associate the node with the schedule in the
LITHOLD domain (the schedule definitions themselves can be identical).
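
For example, something like this should do it (the schedule name DAILY_INCR and
the node name are just placeholders):

COPY SCHEDULE STANDARD DAILY_INCR LITHOLD DAILY_INCR
DEFINE ASSOCIATION LITHOLD DAILY_INCR nodename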

We also deal with other types of litigation holds that require us to take a
snapshot of the data.  For this, we simply export (a copy of) the node to
another TSM server instance where expiration does not run or has no effect.
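
With server-to-server communication already defined, that can be as simple as
something like the following (the target server name LITHOLD_SRV is a
placeholder):

EXPORT NODE nodename FILEDATA=ALL TOSERVER=LITHOLD_SRV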

..Paul


At 05:05 PM 5/3/2013, Vandeventer, Harold [BS] wrote:
>To all...
>I created an RFE regarding file spaces and expiration.  The feature would 
>cause expiration processing to be skipped for a selected file space.
>
>It's RFE ID 33395 if you care to review and vote.
>
>Briefly, the idea is to respond immediately to a situation in which we cannot 
>allow expiration processing to delete information that would otherwise be 
>deleted.  This would be in response to a "Litigation Hold" demand arising from 
>a legal matter.  I've had three LitHold events in the past 24 months; they're 
>not much fun, and I'm not in the courtroom, just the TSM server admin.
>
>Allowing a "LitigationHold=Yes" setting would prevent expiration on the file space.
>
>When the hold is lifted, simply revert to "LitigationHold=No".  The next 
>expiration process would then delete data as normal.
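>
>No such parameter exists today, of course; purely to illustrate, the RFE 
>imagines something along the lines of a hypothetical
>
>UPDATE FILESPACE nodename filespacename LITIGATIONHOLD=YES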
>
>The feature would avoid the complexity of assigning a "no expire" management 
>class to the node and trying to later revert to a more typical class.
>
>Please take a look at the RFE, and cast a vote if you feel it's a valuable 
>feature.
>
>Thanks.
>------------------------------------------------------------
>Harold Vandeventer
>Systems Programmer
>State of Kansas - Office of Information Technology Services
>910 SW Jackson, STE 751-S
>(785) 296-0631
>
>
>[Confidentiality notice:]
>***********************************************************************
>This e-mail message, including attachments, if any, is intended for the 
>person or entity to which it is addressed and may contain confidential 
>or privileged information.  Any unauthorized review, use, or disclosure 
>is prohibited.  If you are not the intended recipient, please contact 
>the sender and destroy the original message, including all copies.
>Thank you.
>***********************************************************************


--
Paul Zarnowski                            Ph: 607-255-4757
Manager of Storage Services               Fx: 607-255-8521
IT at Cornell / Infrastructure            Em: psz1 AT cornell DOT edu
719 Rhodes Hall, Ithaca, NY 14853-3801
