  1. #1
    Newcomer
    Join Date
    Mar 2009
    Location
    Boston
    Posts
    3
    Thanks
    0
    Thanked 0 Times in 0 Posts

    Default Exporting large backups

    We are in the final stages of migrating from 10-year-old hardware running TSM 5.3.6 to new hardware running TSM 5.5.2. To accomplish this, we have been using "export node" to get the data to the new server. This has worked great for making the transition mostly seamless for the users, and it can be done during the day while no backups are occurring.

    We are now left with four nodes that have very large backup sets (800GB, 900GB, 2TB, and 4.5TB). Unfortunately, these are all Oracle TDP backups, with transaction log backups happening every 4 hours. I tested doing an export of the 2TB one and it took 18 hours. I then issued the command again with FROMDate and FROMTime parameters covering the past 18 hours' worth of data. This resulted in additional filespaces being created for each of the databases (/Database was recreated as /Database1). This doesn't look good, so we didn't cut over.

    Is anyone aware of a way to accomplish this migration, short of suspending the TDP processes during the migration? All data is housed on a Virtual Tape Library (EMC DL4200).
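    For reference, what I ran was roughly the following (node and server names changed; exact dates/times omitted). Issue these from the administrative command line on the old server:

    Code:
    /* Pass 1: full server-to-server export of the node (took ~18 hours for 2TB) */
    export node ORA_NODE filedata=all toserver=NEW_TSM

    /* Pass 2: catch-up export of the backups taken since pass 1 started */
    export node ORA_NODE filedata=all fromdate=mm/dd/yyyy fromtime=hh:mm:ss toserver=NEW_TSM

    The second pass is what produced the duplicate filespaces (/Database imported as /Database1).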

  2. #2
    Senior Member
    Join Date
    Dec 2009
    Location
    Sydney, Australia
    Posts
    384
    Thanks
    0
    Thanked 0 Times in 0 Posts

    Default

    Did you try using "mergefilespaces=yes" during the import operation?

    T

  3. #3
    Newcomer
    Join Date
    Jun 2010
    Location
    Puerto Rico
    Posts
    7
    Thanks
    0
    Thanked 0 Times in 0 Posts

    Default

    Make a full backup of the 800GB node's data directly to the old TSM server's DISK pool, and then run an export node from the old TSM server to the new one.
    The reason for this step is that it is faster for the old server to read from disk than from tape cartridges, so the process takes less time.

    Use the export node with "Filedata=allactive merge=yes toserver=newtsmserver"
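    Something like this, assuming placeholder names (ORA_NODE, NEW_TSM) and that you have a way to direct the node's next full backup into the disk pool (the exact steps depend on your policy setup):

    Code:
    /* Step 1: take a full TDP backup of the node so the data lands in the
       old server's DISK pool rather than on the VTL. */

    /* Step 2: server-to-server export of the active data only, merging
       into any filespaces that already exist on the target: */
    export node ORA_NODE filedata=allactive mergefilespaces=yes toserver=NEW_TSM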

  4. #4
    Newcomer
    Join Date
    Mar 2009
    Location
    Boston
    Posts
    3
    Thanks
    0
    Thanked 0 Times in 0 Posts

    Default Thanks

    Thanks guys. I was able to test it today and MERGE=yes solved my problem. I successfully migrated the 2TB node and my DBAs are happy.
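    For anyone who finds this thread later, the catch-up export that worked was along these lines (placeholder names; actual date/time omitted):

    Code:
    export node ORA_NODE filedata=all fromdate=mm/dd/yyyy fromtime=hh:mm:ss mergefilespaces=yes toserver=NEW_TSM

    With mergefilespaces=yes, the second pass merged into the existing /Database filespaces instead of creating /Database1.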

  5. #5
    Member
    Join Date
    Oct 2010
    Posts
    66
    Thanks
    3
    Thanked 3 Times in 3 Posts

    Default

    With mergefilespaces=yes, did I read correctly that if you have already transferred some amount of data, and the original node then performs another nightly backup, you can run a second export to transfer the remaining data, and mergefilespaces=yes will merge that data properly and handle all the versioning and expiration associated with those backups on the new node?

    Unfortunately, I can't find where I read that about mergefilespaces.

  6. #6
    Newcomer
    Join Date
    Mar 2009
    Location
    Boston
    Posts
    3
    Thanks
    0
    Thanked 0 Times in 0 Posts

    Default

    Yes, if you don't specify merge=yes, you wind up with new filespaces, duplicating what already existed from the previous export/import. From "HELP EXPORT NODE":

    Code:
    MERGEfilespaces
         Specifies whether Tivoli Storage Manager merges client files into
         existing file spaces on the target server (if they exist), or if Tivoli
         Storage Manager generates new file space names. The default is NO.
    
         You can only specify this parameter if you have specified the TOSERVER
         parameter.
    
         Valid values are:
         Yes
              Specifies that imported data on the target server is merged with
              the existing file space, if a file space with the same name
              already exists on the target server.
         No
              Specifies that Tivoli Storage Manager generates a new file space
              name for imported data on the target server if file spaces with
              the same name already exist.
         Note:
              If an operation is interrupted and re-started, consider selecting
              YES so your data is not imported into a new file space.

  7. #7
    Member
    Join Date
    Oct 2010
    Posts
    66
    Thanks
    3
    Thanked 3 Times in 3 Posts

    Default

    Thanks! I did read and understand that, but it doesn't specifically mention the versioning of backups as part of mergefilespaces. I know I've read it somewhere, I just cannot remember where.

    Thanks again.
