Subject: Re: [ADSM-L] Fantasy TSM
From: "Hart, Charles A" <charles_hart AT UHC DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Mon, 5 May 2008 15:57:45 -0500
There's a new NDMP option in TSM 5.4 that lets you perform an IP-based
backup to a TSM storage pool, after which the data can be processed like
any other TSM data hitting a stgpool...



-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of
Curtis Preston
Sent: Monday, May 05, 2008 3:11 PM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Re: [ADSM-L] Fantasy TSM

> I'm wondering if TSM could take the NDMP dump and post-process it, to
> carve out the changed files only and handle them according to regular
> mgmt classes.

It is possible to do that, but I know of only one product that does it.
Avamar processes the dump stream (inline, actually), figures out which
blocks are new, and keeps only those.  So it's definitely feasible.
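Avamar's actual dump-stream format and dedup engine are proprietary, but the general technique it relies on (content-addressed block deduplication of a byte stream) can be sketched in a few lines. Everything below is illustrative: the fixed 4 KB chunk size, the in-memory dict acting as the chunk store, and the `dedup_stream` function are all assumptions, not anything TSM or Avamar actually exposes.

```python
import hashlib
import io

def dedup_stream(stream, store, chunk_size=4096):
    """Split a backup stream into fixed-size chunks and keep only the
    chunks whose content hash has not been seen before.  Returns the
    ordered list of chunk hashes (the 'recipe' needed to reconstruct
    the original stream later)."""
    recipe = []
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:
            store[digest] = chunk      # new block: store its bytes
        recipe.append(digest)          # known block: store a reference only
    return recipe

# First "full dump": all three blocks are new, so all three are stored.
store = {}
full = io.BytesIO(b"A" * 4096 + b"B" * 4096 + b"C" * 4096)
recipe1 = dedup_stream(full, store)

# Second dump with one changed block: only that block is stored anew;
# the unchanged blocks cost nothing beyond a hash reference.
changed = io.BytesIO(b"A" * 4096 + b"X" * 4096 + b"C" * 4096)
recipe2 = dedup_stream(changed, store)
```

A real implementation would typically use variable-size (content-defined) chunking so that an insertion near the start of a dump doesn't shift every subsequent block boundary, but the keep-only-new-blocks idea is the same.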

The question is whether or not IBM sees enough revenue from the NDMP
agent to do that much work on it.







