Backing up from a primary disk pool to a primary file pool with dedup?

foobar2devnull

Hi all,

I am trying to set up a new TSM server instance. This is my first shot at it and I think I'm doing something wrong.

Code:
nodes --> PrimaryDisk OS (PD01) +-> Primary Tape (PT01) --> CopyTape (CT01)
                                |
                                +-> Primary File (PF01)

The flow I had in mind, as described above, was this:

Nodes back up to a primary disk pool. Once all backups are done, the data is backed up to the copy tape pool for offsite storage. The next step was to back up to the primary file pool, followed by a migration to the primary tape pool.

The problem I am having is getting data onto the deduplicated primary file pool. I cannot figure out how this can be done. Note that this is not an active-data pool. The idea is that clients restore from PF01 for speed. If that fails for whatever reason, they can still get their data from PT01 or CT01.

Any help and pointers to documentation are welcome :)

Thanks!

The following is the setup of each pool if needed:

Primary disk pool
Code:
                    Storage Pool Name: PD01
                    Storage Pool Type: Primary
                    Device Class Name: DISK
                   Estimated Capacity: 191 G
                   Space Trigger Util: 11.6
                             Pct Util: 11.6
                             Pct Migr: 11.6
                          Pct Logical: 100.0
                         High Mig Pct: 90
                          Low Mig Pct: 40
                      Migration Delay: 0
                   Migration Continue: Yes
                  Migration Processes: 1
                Reclamation Processes: 
                    Next Storage Pool: PT01
                 Reclaim Storage Pool: 
               Maximum Size Threshold: No Limit
                               Access: Read/Write
                          Description: Primary OS disk pool
                    Overflow Location: 
                Cache Migrated Files?: No
                           Collocate?: 
                Reclamation Threshold: 
            Offsite Reclamation Limit: 
      Maximum Scratch Volumes Allowed: 
       Number of Scratch Volumes Used: 
        Delay Period for Volume Reuse: 
               Migration in Progress?: No
                 Amount Migrated (MB): 0.00
     Elapsed Migration Time (seconds): 0
             Reclamation in Progress?: 
       Last Update by (administrator): JDOE
                Last Update Date/Time: 10/15/2012 09:05:16
             Storage Pool Data Format: Native
                 Copy Storage Pool(s): 
                  Active Data Pool(s): 
              Continue Copy on Error?: Yes
                             CRC Data: No
                     Reclamation Type: 
          Overwrite Data when Deleted: 
                    Deduplicate Data?: No
 Processes For Identifying Duplicates: 
            Duplicate Data Not Stored: 
                       Auto-copy Mode: Client
Contains Data Deduplicated by Client?: No

Primary file pool
Code:
                    Storage Pool Name: PF01
                    Storage Pool Type: Primary
                    Device Class Name: FILECLASS
                   Estimated Capacity: 0.0 M
                   Space Trigger Util: 0.0
                             Pct Util: 0.0
                             Pct Migr: 100.0
                          Pct Logical: 0.0
                         High Mig Pct: 90
                          Low Mig Pct: 70
                      Migration Delay: 0
                   Migration Continue: Yes
                  Migration Processes: 1
                Reclamation Processes: 1
                    Next Storage Pool: 
                 Reclaim Storage Pool: 
               Maximum Size Threshold: No Limit
                               Access: Read/Write
                          Description: Primary Deduplication File Pool
                    Overflow Location: 
                Cache Migrated Files?: 
                           Collocate?: Group
                Reclamation Threshold: 60
            Offsite Reclamation Limit: 
      Maximum Scratch Volumes Allowed: 20
       Number of Scratch Volumes Used: 0
        Delay Period for Volume Reuse: 0 Day(s)
               Migration in Progress?: No
                 Amount Migrated (MB): 0.00
     Elapsed Migration Time (seconds): 0
             Reclamation in Progress?: No
       Last Update by (administrator): JDOE
                Last Update Date/Time: 10/12/2012 09:58:17
             Storage Pool Data Format: Native
                 Copy Storage Pool(s): 
                  Active Data Pool(s): 
              Continue Copy on Error?: Yes
                             CRC Data: No
                     Reclamation Type: Threshold
          Overwrite Data when Deleted: 
                    Deduplicate Data?: Yes
 Processes For Identifying Duplicates: 2
            Duplicate Data Not Stored: 0  (0%)
                       Auto-copy Mode: Client
Contains Data Deduplicated by Client?: No

Primary Tape
Code:
                    Storage Pool Name: PT01
                    Storage Pool Type: Primary
                    Device Class Name: LTO5
                   Estimated Capacity: 292,969 G
                   Space Trigger Util: 
                             Pct Util: 0.0
                             Pct Migr: 1.0
                          Pct Logical: 100.0
                         High Mig Pct: 90
                          Low Mig Pct: 70
                      Migration Delay: 0
                   Migration Continue: Yes
                  Migration Processes: 1
                Reclamation Processes: 1
                    Next Storage Pool: 
                 Reclaim Storage Pool: 
               Maximum Size Threshold: No Limit
                               Access: Read/Write
                          Description: Primary tape pool for OS
                    Overflow Location: 
                Cache Migrated Files?: 
                           Collocate?: Group
                Reclamation Threshold: 70
            Offsite Reclamation Limit: 
      Maximum Scratch Volumes Allowed: 100
       Number of Scratch Volumes Used: 1
        Delay Period for Volume Reuse: 8 Day(s)
               Migration in Progress?: No
                 Amount Migrated (MB): 0.00
     Elapsed Migration Time (seconds): 0
             Reclamation in Progress?: No
       Last Update by (administrator): JDOE
                Last Update Date/Time: 10/12/2012 11:06:26
             Storage Pool Data Format: Native
                 Copy Storage Pool(s): 
                  Active Data Pool(s): 
              Continue Copy on Error?: Yes
                             CRC Data: No
                     Reclamation Type: Threshold
          Overwrite Data when Deleted: 
                    Deduplicate Data?: No
 Processes For Identifying Duplicates: 
            Duplicate Data Not Stored: 
                       Auto-copy Mode: Client
Contains Data Deduplicated by Client?: No

Copy Tape
Code:
                    Storage Pool Name: CT01
                    Storage Pool Type: Copy
                    Device Class Name: LTO5
                   Estimated Capacity: 292,969 G
                   Space Trigger Util: 
                             Pct Util: 0.0
                             Pct Migr: 
                          Pct Logical: 100.0
                         High Mig Pct: 
                          Low Mig Pct: 
                      Migration Delay: 
                   Migration Continue: 
                  Migration Processes: 
                Reclamation Processes: 1
                    Next Storage Pool: 
                 Reclaim Storage Pool: 
               Maximum Size Threshold: 
                               Access: Read/Write
                          Description: Copy tape pool for files and DBs
                    Overflow Location: 
                Cache Migrated Files?: 
                           Collocate?: No
                Reclamation Threshold: 70
            Offsite Reclamation Limit: No Limit
      Maximum Scratch Volumes Allowed: 100
       Number of Scratch Volumes Used: 1
        Delay Period for Volume Reuse: 8 Day(s)
               Migration in Progress?: 
                 Amount Migrated (MB): 
     Elapsed Migration Time (seconds): 
             Reclamation in Progress?: No
       Last Update by (administrator): JDOE
                Last Update Date/Time: 10/11/2012 09:52:36
             Storage Pool Data Format: Native
                 Copy Storage Pool(s): 
                  Active Data Pool(s): 
              Continue Copy on Error?: 
                             CRC Data: No
                     Reclamation Type: Threshold
          Overwrite Data when Deleted: 
                    Deduplicate Data?: No
 Processes For Identifying Duplicates: 
            Duplicate Data Not Stored: 
                       Auto-copy Mode: 
Contains Data Deduplicated by Client?: No
 
You cannot migrate to more than one primary storage pool; you can only make several copies (to a copy storage pool or an active-data pool).
If you have enough space in your primary disk pool, leave the migrated files there for fast restores: option CACHE=YES.
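
For example (syntax from memory, please double-check with HELP UPDATE STGPOOL; caching only applies to random-access DISK pools like PD01):

Code:
update stgpool PD01 cache=yes

With caching on, files stay on disk after migration and are only overwritten when the space is needed for new data, so restores of recent data come from disk.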
 
Hi inestler,

Thank you for your help. I realise I can't migrate to multiple pools. My ill-written question was more like: "How do I use a deduplicated primary file pool as a primary pool for quick restores, and avoid restores starting from the primary tape pool?"

Now that I formulate the question differently, I realise that two copy tape pools might be a better solution, where one stays local and the other goes off-site.

What are your thoughts on that? ;)
 
If you want a separate pool for quick restores, I would prefer an active-data pool. This can be a deduplicated file pool.
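
Roughly like this (a sketch only; ADP01 is a placeholder name and your domain may not be STANDARD, so adjust to your setup):

Code:
define stgpool ADP01 FILECLASS pooltype=activedata deduplicate=yes maxscratch=20
update domain STANDARD activedestination=ADP01
copy activedata PD01 ADP01

The COPY ACTIVEDATA step can then run as a daily maintenance task to keep the active versions in the file pool.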
 
Correct me if I'm wrong, but an active-data pool will only hold the current (active) data, not the entire backup retention like you'd find on tape. If so, an active-data pool is not what we are after. We have SLAs that require fast restores of data at any point in time.
 
Hi,

I ended up changing the way I structured the volume pools.

Code:
nodes --> PrimaryDisk OS (PD01) +-> Primary File (PF01)
                                |
                                +---> Copy Tape (CT01)
                                |
                                +---> Copy Tape (CT02) off-site

I now back up to the copy tape pools and then migrate to the primary file pool. This way, restores will come from PF01 first.
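
In case it helps anyone, the daily sequence would look something like this (a sketch only; the UPDATE is a one-time change of PD01's next pool from PT01 to PF01, and LOWMIG=0 drains the disk pool completely):

Code:
update stgpool PD01 nextstgpool=PF01
backup stgpool PD01 CT01 wait=yes
backup stgpool PD01 CT02 wait=yes
migrate stgpool PD01 lowmig=0 wait=yes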

If anyone sees something wrong with this setup, do let me know... I am, after all, still learning. ;)

Thanks for your help.
 
You can do it this way. What kind of storage do you plan for PF01? An appliance (Data Domain etc.) or local disk?
 
Well, PF01 will be a pool of 10K SAN disks over Fibre Channel. I did a test and it all seems to work as expected. The data will mostly be a bunch of OSes (Linux and Windows), so dedup should work a treat.

Thank you for your help.
 