  1. #1
    Newcomer | Join Date: Jun 2007 | Posts: 257

    TSM & Data Deduplication

    Hi All,

    I see there are TSMers out there backing up data to their primary disk storage pools and then offloading to de-dupe devices after the backup cycle is complete.

    I'm looking to see how they've implemented this from a logistical point of view, i.e. is the next storage pool on the other side of the de-dupe device? Is the copy storage pool on the other side of the de-dupe device?
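
    To make the question concrete, here's a rough sketch of the layout I'm picturing, with the de-dupe appliance presented to TSM as a VTL. All the names are made up and the drive/path definitions are left out; I'm not saying this is how anyone should do it, just showing where the pools would sit:

        /* de-dupe appliance presented to the TSM server as a SCSI-attached VTL   */
        define library dedupe_vtl libtype=scsi
        define devclass dedupe_class devtype=lto library=dedupe_vtl format=drive
        /* primary sequential pool living on the appliance (behind the de-dupe)   */
        define stgpool dedupe_pool dedupe_class maxscratch=200
        /* random-access disk pool that clients hit first, migrating to the VTL   */
        define stgpool diskpool disk nextstgpool=dedupe_pool highmig=70 lowmig=30
        /* copy storage pool also carved out of the appliance, for backup stgpool */
        define stgpool copypool dedupe_class pooltype=copy maxscratch=200

    In that sketch both the next storage pool and the copy storage pool end up on the other side of the de-dupe device; what I'd like to know is whether that's how people are actually doing it.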

    Does IBM have any recommendations with respect to TSM and de-duplication?

    Any information that anyone can provide, or point me toward documentation, would be greatly appreciated.

    Thx.

  2. #2
    Member | Join Date: Jun 2007 | Location: Johannesburg | Posts: 53

    Deduplication is due to be added in TSM 6.1, which is currently scheduled for release in 4Q '08.

    There should be some info out on the IBM web site, or ask your Tivoli pre-sales guys, as I know they have the presentations.

  3. #3
    Member | Join Date: Dec 2006 | Location: Netherlands | Posts: 44

    Quote Originally Posted by Leigh
    Deduplication is due to be added in TSM 6.1, which is currently scheduled for release in 4Q '08. There should be some info out on the IBM web site, or ask your Tivoli pre-sales guys, as I know they have the presentations.

    The kind of de-duplication Leigh is talking about is file-level de-duplication. Dedicated de-duplication devices do block-level de-duplication.

    I think the most efficient way would be to de-duplicate your data before sending it to a remote location.
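
    For what it's worth, if the 6.1 de-duplication does surface as a storage pool option the way it has been described, I'd expect it to look something along these lines on a FILE-class pool. That is purely a guess until the product actually ships, and every name below is made up:

        /* sequential FILE device class backed by server disk                     */
        define devclass filedev devtype=file maxcapacity=50G mountlimit=20 directory=/tsm/filepool
        /* primary pool with the (announced) server-side de-duplication enabled   */
        define stgpool dedupfilepool filedev maxscratch=500 deduplicate=yes
        /* background process that identifies duplicate data in the pool          */
        identify duplicates dedupfilepool

    Either way, doing the de-duplication before the data goes over the wire is where the saving toward a remote location would come from.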

  4. #4
    Member | Join Date: Jun 2005 | Posts: 12

    I'll pose the same questions in this thread as I have in the others regarding deduplication.

    Have any of you gone into a DR test with deduplicated data, and did you suffer any delays when multiple clients were pulling and waiting on the same piece of deduped data? Also, has anyone experienced delays when backup stgpool processes are running against deduped data? I can't get any of the vendors to give me a straight answer, so for now dedupe is a no-go. The most important aspects of the systems I am architecting are operational recovery as well as disaster recovery, and I certainly don't want to buy into this hype only to find out my backup improvements killed me at a DR. Any help is appreciated; see the sketch below for the kind of copy run I mean. Thanks!
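
    Something like this, with made-up pool names, is the operation I'm talking about; the worry is whether the appliance can rehydrate fast enough when several of these processes, or several restoring clients, want the same deduped data at once:

        /* copy the primary pool sitting on the de-dupe appliance to the copy pool */
        backup stgpool dedupe_pool copypool maxprocess=4 wait=no
        /* watch throughput while it runs                                          */
        query process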
    Todd Blacet
