Subject: Re: A requirement ..
From: "Mark D. Rodriguez" <mark AT MDRCONSULT DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Wed, 11 Jan 2006 11:30:16 -0600
David Sniper Rigaudiere wrote:

1. Files a, b, c are all created & backed up on day 1
2. File a gets deleted on day 5
3. File b gets deleted on day 25
4. File c gets deleted on day 49

They want a, b, c all to expire from TSM on day 50, regardless of when
they were deleted (which, as far as I can see, can only be accomplished
by using archive, leading to a 15GB x 50 day = 750GB stgpool).  Of course,
nowadays 750GB is not that big a deal, so maybe this is an OK solution
for them.
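If the archive route is taken, the 50-day retention would come from an archive copy group with RETVER=50. A minimal sketch of the server-side setup (the domain, policy set, management class, and storage pool names here are made up, and the syntax is from memory, so check the admin reference):

```
define mgmtclass standard standard arch50
define copygroup standard standard arch50 type=archive destination=archivepool retver=50
activate policyset standard standard
```

The client would then bind its archives to that class with something like dsmc archive -archmc=arch50.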



Maybe a script which browses the filesystem and produces a file list based
on each file's creation date, then archives those files with the -filelist option.

David "Sniper" Rigaudiere



I was thinking along the same lines as David.  If this is a Unix/Linux
client, then you could use the find command to find all files newer
than the previous backup date.  Then you could archive only those files
for the 50 days.  It would look something like this:

find /fs -cnewer /archive_timestamp.file > /file_list
dsmc archive -filelist=/file_list
touch /archive_timestamp.file

Obviously, the exact syntax of the find command may differ depending on
your version of Unix/Linux, and you may want to add additional parameters
to the archive command, but I think you get the idea from here.
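You can sanity-check the timestamp logic with throwaway files before pointing it at dsmc.  A quick demo (all the paths and names here are just for the demo; the archive step itself is left out):

```shell
#!/bin/sh
# Demo of the -cnewer timestamp pattern with ordinary files; no dsmc involved.
WORK=$(mktemp -d)          # stands in for the filesystem being archived
LIST=$(mktemp)             # the file list we would feed to dsmc -filelist
touch "$WORK/archive_timestamp.file"
sleep 1                    # make sure a file created now gets a later ctime
echo data > "$WORK/newfile"   # stands in for a file created since the last run
# Collect everything whose status changed after the timestamp file was touched
find "$WORK" -type f -cnewer "$WORK/archive_timestamp.file" > "$LIST"
cat "$LIST"                # this is what would go to dsmc archive -filelist
# Only after a successful archive, advance the marker for the next run
touch "$WORK/archive_timestamp.file"
```

Note the marker file is touched only after the archive step, so a failed run would be picked up again next time.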

Good luck with it, and let us know how you work it out.

--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.

===============================================================================
MDR Consulting & Education
The very best in Technical Training and Consulting.
IBM Premier Business Partner, VMware Enterprise VIP Reseller
Certified Consultants and Instructors Supporting:
AIX, Linux, Windows, Tivoli, Lotus, WebSphere, DB2, VMware
===============================================================================
