Re: [ADSM-L] Copypool using more tapes than primary tapepool

2009-04-17 16:47:28
Subject: Re: [ADSM-L] Copypool using more tapes than primary tapepool
From: Kelly Lipp <lipp AT STORSERVER DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Fri, 17 Apr 2009 14:45:45 -0600
Do you manually kick off reclamation on the copy pool? I noticed that the 
reclamation threshold on that pool is set to 100%.
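To get reclamation going on the copy pool you could either lower that threshold or kick it on demand; a rough sketch of both, assuming a 60% threshold and a 3-hour cap (pick whatever fits your drive availability):

/* let the daily cycle reclaim copy pool tapes that drop below 60% utilization */
update stgpool tapepool7 reclaim=60

/* or a one-off run without changing the pool definition */
reclaim stgpool tapepool7 threshold=60 duration=180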

It isn't unusual for reclamation processing to leave the pools somewhat out of step.  
Generally, the copy pool will use slightly more tapes, since reclamation there isn't as 
aggressive and you will always have more partially empty tapes in that pool (particularly 
if they are being removed from the library for offsite storage) than you will have in the 
primary pool.
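You can see how much of the difference is just partially filled tapes by comparing the two pools, either with plain query volume or with a select against the standard VOLUMES table, along these lines:

query volume stgpool=tapepool6 status=filling
query volume stgpool=tapepool7 status=filling

/* or one summary: volume counts per pool and state */
select stgpool_name, status, count(*) from volumes group by stgpool_name, status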

A 6-9 tape difference I wouldn't worry about.  Hundreds?  Then something is probably wrong.

That said, you clearly need something better to do!  Isn't it amazing how easy 
it is to run a TSM environment?  The stuff that used to be hard is easy, so you 
can start focusing on other things!

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of 
Larry Peifer
Sent: Friday, April 17, 2009 2:39 PM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: [ADSM-L] Copypool using more tapes than primary tapepool

Why are we using more tapes in the copypool library than in the primary tape
library?

There is a 6 - 9 tape difference between the copypool and the primary tape
pool.  We average ~500 GB per tape, so that's roughly 3 - 4.5 TB of data.  It
doesn't seem like there should be that much of a discrepancy.  There is
both backup data and archive data mixed on the tapes, and the DB backups are
taken into account.
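For what it's worth, comparing the logical occupancy per pool rather than the tape counts should show whether the amount of data actually matches; a select along these lines would do it (column names from the standard OCCUPANCY table):

/* total logical MB stored per storage pool */
select stgpool_name, sum(logical_mb) from occupancy group by stgpool_name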

We have 2 identically configured IBM 3584 tape libraries.

On a daily basis our disk pools are migrated (migrate stgpool diskpool
lo=0) to the primary tape pool.

Then a daily schedule (backup stgpool tapepool6 tapepool7 maxprocess=4) is
run to keep everything equal between the 2 tape libraries.
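(That runs as an ordinary administrative schedule, roughly like the definition below; the schedule name and start time are placeholders rather than our exact values.)

/* illustrative only - schedule name and start time are placeholders */
define schedule daily_copy type=administrative cmd="backup stgpool tapepool6 tapepool7 maxprocess=4" active=yes starttime=20:00 period=1 perunits=days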

Daily expiration and reclamation processes finish fine.

Schedules report successful completion daily.

Running TSM Server 5.4 on AIX 5.3 on a p520 server, with LTO2 tapes and HW
compression.

Storage Pool configurations:

Storage Pool Name: DISKPOOL
Storage Pool Type: Primary
Device Class Name: DISK
Estimated Capacity: 2,400 G
Space Trigger Util: 0.4
Pct Util: 0.4
Pct Migr: 0.4
Pct Logical: 100.0
High Mig Pct: 90
Low Mig Pct: 70
Migration Delay: 0
Migration Continue: Yes
Migration Processes: 4
Reclamation Processes:
Next Storage Pool: TAPEPOOL6
Reclaim Storage Pool:
Maximum Size Threshold: No Limit
Access: Read/Write
Description: Main Disk Storage Pool
Overflow Location:
Cache Migrated Files?: No
Collocate?:
Reclamation Threshold:
Offsite Reclamation Limit:
Maximum Scratch Volumes Allowed:
Number of Scratch Volumes Used:
Delay Period for Volume Reuse:
Migration in Progress?: No
Amount Migrated (MB): 1,235,496.70
Elapsed Migration Time (seconds): 9,284
Reclamation in Progress?:
Last Update by (administrator): admin
Last Update Date/Time: 08/24/07   09:50:37
Storage Pool Data Format: Native
Copy Storage Pool(s):
Active Data Pool(s):
Continue Copy on Error?: Yes
CRC Data: No
Reclamation Type:
Overwrite Data when Deleted:

Storage Pool Name: TAPEPOOL6
Storage Pool Type: Primary
Device Class Name: LTOCLASS6
Estimated Capacity: 121,841 G
Space Trigger Util:
Pct Util: 32.9
Pct Migr: 47.0
Pct Logical: 99.3
High Mig Pct: 90
Low Mig Pct: 70
Migration Delay: 0
Migration Continue: Yes
Migration Processes: 2
Reclamation Processes: 2
Next Storage Pool:
Reclaim Storage Pool:
Maximum Size Threshold: No Limit
Access: Read/Write
Description: Primary Sequential Tape
Overflow Location:
Cache Migrated Files?:
Collocate?: No
Reclamation Threshold: 100
Offsite Reclamation Limit:
Maximum Scratch Volumes Allowed: 300
Number of Scratch Volumes Used: 152
Delay Period for Volume Reuse: 3 Day(s)
Migration in Progress?: No
Amount Migrated (MB): 0.00
Elapsed Migration Time (seconds): 0
Reclamation in Progress?: No
Last Update by (administrator): admin
Last Update Date/Time: 04/07/09   14:06:34
Storage Pool Data Format: Native
Copy Storage Pool(s):
Active Data Pool(s):
Continue Copy on Error?: Yes
CRC Data: Yes
Reclamation Type: Threshold
Overwrite Data when Deleted:

Storage Pool Name: TAPEPOOL7
Storage Pool Type: Copy
Device Class Name: LTOCLASS7
Estimated Capacity: 120,330 G
Space Trigger Util:
Pct Util: 32.3
Pct Migr:
Pct Logical: 99.3
High Mig Pct:
Low Mig Pct:
Migration Delay:
Migration Continue: Yes
Migration Processes:
Reclamation Processes: 2
Next Storage Pool:
Reclaim Storage Pool:
Maximum Size Threshold:
Access: Read/Write
Description: Copy Pool
Overflow Location:
Cache Migrated Files?:
Collocate?: No
Reclamation Threshold: 100
Offsite Reclamation Limit: No Limit
Maximum Scratch Volumes Allowed: 300
Number of Scratch Volumes Used: 157
Delay Period for Volume Reuse: 3 Day(s)
Migration in Progress?:
Amount Migrated (MB):
Elapsed Migration Time (seconds):
Reclamation in Progress?: Yes
Last Update by (administrator): admin
Last Update Date/Time: 12/14/07   13:56:37
Storage Pool Data Format: Native
Copy Storage Pool(s):
Active Data Pool(s):
Continue Copy on Error?:
CRC Data: No
Reclamation Type: Threshold
Overwrite Data when Deleted:

=================================
DEVCLASS Configuration:

Device Class Name: LTOCLASS6
Device Access Strategy: Sequential
Storage Pool Count: 1
Device Type: LTO
Format: ULTRIUM2C
Est/Max Capacity (MB):
Mount Limit: DRIVES
Mount Wait (min): 10
Mount Retention (min): 5
Label Prefix: ADSM
Library: LTOLIB6
Directory:
Server Name:
Retry Period:
Retry Interval:
Shared:
High-level Address:
Minimum Capacity:
WORM: No
Drive Encryption:
Scaled Capacity:
Last Update by (administrator): admin

Device Class Name: LTOCLASS7
Device Access Strategy: Sequential
Storage Pool Count: 1
Device Type: LTO
Format: ULTRIUM2C
Est/Max Capacity (MB):
Mount Limit: DRIVES
Mount Wait (min): 10
Mount Retention (min): 5
Label Prefix: ADSM
Library: LTOLIB7
Directory:
Server Name:
Retry Period:
Retry Interval:
Shared:
High-level Address:
Minimum Capacity:
WORM: No
Drive Encryption:
Scaled Capacity:
Last Update by (administrator): admin