Migration causing storage pool size increase

mgonline (ADSM.ORG Member, joined Jun 29, 2007)
Hi,

I have two disk pools. Recovery_Pool and recovery_File.
Recovery_Pool migrates to Recovery_File. However, when I run migration, the utilization percentage of Recovery_Pool does not change, but the Estimated Capacity of Recovery_File keeps increasing. I am unable to carry on with migration because the filesystem is filling up: the Recovery_File pool adds new volumes every time I run it.


BEFORE MIGRATION STARTED on RECOVERY_POOL
adsm> q stgp
Storage        Device       Estimated    Pct    Pct    High   Low   Next
Pool Name      Class Name   Capacity     Util   Migr   Mig    Mig   Storage
                                                       Pct    Pct   Pool
-----------    ----------   ----------   -----  -----  ----   ---   -----------
RECOVERY_FILE  RECOVERY     11,979 G       1.6    1.8   100    99
RECOVERY_POOL  DISK          1,875 G      94.0   94.0   100    99   RECOVERY_FILE


AFTER MIGRATION STARTED on RECOVERY_POOL
adsm> Q STGP
Storage        Device       Estimated    Pct    Pct    High   Low   Next
Pool Name      Class Name   Capacity     Util   Migr   Mig    Mig   Storage
                                                       Pct    Pct   Pool
-----------    ----------   ----------   -----  -----  ----   ---   -----------
RECOVERY_FILE  RECOVERY     12,973 G       1.5    1.9   100    99
RECOVERY_POOL  DISK          1,875 G      94.0   94.0   100    99   RECOVERY_FILE

Details of the Recovery_file

adsm> q stgp recovery_file f=d
Storage Pool Name: RECOVERY_FILE
Storage Pool Type: Primary
Device Class Name: RECOVERY
Estimated Capacity: 13,025 G
Space Trigger Util: 79.3
Pct Util: 1.5
Pct Migr: 1.9
Pct Logical: 68.2
High Mig Pct: 100
Low Mig Pct: 99
Migration Delay: 0
Migration Continue: Yes
Migration Processes: 1
Reclamation Processes: 1
Next Storage Pool:
Reclaim Storage Pool:
Maximum Size Threshold: No Limit
Access: Read/Write
Description: Recovery Pool utilising device class
of file
Overflow Location:
Cache Migrated Files?:
Collocate?: No
Reclamation Threshold: 60
Offsite Reclamation Limit:
Maximum Scratch Volumes Allowed: 5,000
Number of Scratch Volumes Used: 96
Delay Period for Volume Reuse: 1 Day(s)
Migration in Progress?: No
Amount Migrated (MB): 0.00
Elapsed Migration Time (seconds): 0
Reclamation in Progress?: No
Last Update by (administrator): REPORT
Last Update Date/Time: 01/30/2009 10:54:35
Storage Pool Data Format: Native
Copy Storage Pool(s):
Active Data Pool(s):
Continue Copy on Error?: Yes
CRC Data: No
Reclamation Type: Threshold
Overwrite Data when Deleted:

Recovery_Pool details:

adsm> q stgp recovery_pool f=d
Storage Pool Name: RECOVERY_POOL
Storage Pool Type: Primary
Device Class Name: DISK
Estimated Capacity: 1,875 G
Space Trigger Util: 95.2
Pct Util: 95.2
Pct Migr: 95.2
Pct Logical: 35.4
High Mig Pct: 100
Low Mig Pct: 99
Migration Delay: 0
Migration Continue: Yes
Migration Processes: 10
Reclamation Processes:
Next Storage Pool: RECOVERY_FILE
Reclaim Storage Pool:
Maximum Size Threshold: No Limit
Access: Read/Write
Description: specific recovery files
Overflow Location:
Cache Migrated Files?: No
Collocate?:
Reclamation Threshold:
Offsite Reclamation Limit:
Maximum Scratch Volumes Allowed:
Number of Scratch Volumes Used:
Delay Period for Volume Reuse:
Migration in Progress?: No
Amount Migrated (MB): 0.00
Elapsed Migration Time (seconds): 0
Reclamation in Progress?:
Last Update by (administrator): 1068479
Last Update Date/Time: 03/02/2009 12:34:41
Storage Pool Data Format: Native
Copy Storage Pool(s):
Active Data Pool(s):
Continue Copy on Error?: Yes
CRC Data: No
Reclamation Type:
Overwrite Data when Deleted:

Could somebody shed some light on how to resolve this problem, please?

Thanks
 
OK!

Maximum Scratch Volumes Allowed: 5,000
Number of Scratch Volumes Used: 96

For each migration, the migration job adds some scratch volumes to RECOVERY_FILE (currently 96). The next migration then takes, for example, four new volumes, so you have 100 volumes in use.

Using 100 volumes gives a higher estimated capacity than using 96 volumes, OK?

So the estimated capacity grows!
Take a look at your reclamation threshold!
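To put numbers on that (a deliberately simplified sketch: it only counts the scratch volumes already created, while the server's real estimate is larger because it also counts space still available for new scratch volumes), each FILE volume here is 5 GB per the devclass, so every scratch volume migration allocates raises the pool's on-disk footprint and estimate by another 5 GB. The 96 -> 100 volume counts are the example from the reply above.

```python
# Simplified model: capacity contributed by the scratch volumes a FILE
# pool has actually created. Volume size (5 GB) comes from the devclass
# RECOVERY shown later in this thread (Est/Max Capacity (MB): 5,120.0).

VOLUME_SIZE_GB = 5

def allocated_capacity_gb(scratch_volumes_used: int) -> int:
    """Space consumed in the filesystem by the volumes created so far."""
    return scratch_volumes_used * VOLUME_SIZE_GB

before = allocated_capacity_gb(96)   # 480 GB from the existing 96 volumes
after = allocated_capacity_gb(100)   # migration allocated 4 more volumes
print(after - before)                # growth caused by the 4 new volumes
```

So each migration pass that needs fresh scratch volumes eats another few GB of the filesystem, independent of how full the individual volumes are.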
 
When I checked the total used space on all the existing volumes in the pool combined, it comes to around 260 GB, which is a far cry from 13 TB.

The filesystem usage percentage goes up when new volumes are added. What happens when the filesystem reaches 100%?
 
The problem is, I am unable to understand why, if I have 13 TB of capacity available and only 260 GB used, the pool is still expanding and filling up the filesystem.

Even if I do a MOVE DATA, it adds new volumes.

As it is a production server, I cannot afford to experiment on it, such as checking what happens when the filesystem becomes full.
 

RECOVERY_FILE's capacity is 13 TB probably because that is 5,000 (MAXSCRATCH) multiplied by the maximum volume size (check the devclass). If you don't have that amount of space in the filesystem, it obviously can't grow that much. And the reason it adds volumes is that it only creates them as it needs them; it doesn't create all 13 TB at the start.
 

Thanks for the clarification. I am a bit clearer in the head now. :)

Here is the devclass

adsm> q devclass f=d
Device Class Name: RECOVERY
Device Access Strategy: Sequential
Storage Pool Count: 1
Device Type: FILE
Format: DRIVE
Est/Max Capacity (MB): 5,120.0
Mount Limit: 20
Mount Wait (min):
Mount Retention (min):
Label Prefix:
Library:
Directory: /tsmsrv/data/FileDevc
Server Name:
Retry Period:
Retry Interval:
Shared:
High-level Address:
Minimum Capacity:
WORM: No
Drive Encryption:
Scaled Capacity:
Last Update by (administrator): REPORT
Last Update Date/Time: 01/30/2009 10:40:00
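Given that devclass, if the worry is the pool eating the whole filesystem, one possible mitigation (an untested sketch; the limit of 200 is a hypothetical value, so pick one that fits what the filesystem can actually hold) is to cap how many scratch volumes the pool may create, since growth is driven by MAXSCRATCH times the 5 GB volume size:

adsm> update stgpool recovery_file maxscratch=200
adsm> q stgp recovery_file f=d

With MAXSCRATCH=200 and 5 GB volumes, the pool can occupy at most about 1 TB on disk, while still leaving headroom above the 96 volumes currently in use.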
 