We have a single domain with a 10.5 TB disk pool. This disk pool fills up every night (we cannot allocate more disk at this time). To mitigate this, we run migrations fairly frequently to drive the data level in the pool down. However, the system is behaving oddly. The disk pool is configured with 6 migration processes (MIGPROCESS=6) and a high migration threshold (HIGHMIG) of 75.
I have 6 LTO6 drives, all attached via 8 Gb fiber in an IBM TS3500 library.
This same behavior occurred when we were using LTO5 media.
The disk pool will be at more than 50% full. A manual migration is triggered, which spawns 6 migration processes from disk to tape.
Despite Pct Migr being over 50% (more than 5 TB of migratable data), 2 of the processes end almost immediately with a completion state of "Success" after moving only 200-300 MB of data.
Within 30 minutes, all but 1 migration process has completed "successfully," leaving 40-50% of the disk pool still migratable.
Canceling the remaining process and restarting the migration spawns 6 processes again, which follow the same pattern all over again.
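For reference, this is roughly how the pool is tuned and the manual migration is kicked off, using standard administrative-client commands (a sketch; DISKPOOL is a placeholder for our actual pool name, and the admin credentials are redacted):

```shell
# Connect with the TSM administrative command-line client
dsmadmc -id=admin -password=XXXXXXX

# Pool settings: migrate when 75% full, stop at 30%, up to 6 parallel processes
UPDATE STGPOOL DISKPOOL HIGHMIG=75 LOWMIG=30 MIGPROCESS=6

# Manually force migration all the way down, in the background
MIGRATE STGPOOL DISKPOOL LOWMIG=0 WAIT=NO

# Watch the migration processes and the pool's Pct Migr value
QUERY PROCESS
QUERY STGPOOL DISKPOOL FORMAT=DETAILED
```

It is after the MIGRATE STGPOOL (or an automatic HIGHMIG trigger) that the process count collapses as described above.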
What logic is TSM using to decide it only needs a single migration stream when there's that much data, and how can I work around it? I'm filling my disk pool every night, failing over to direct-to-tape backups, and also having outright backup failures, because even an auto-triggered migration follows the same pattern: all 6 migration processes start, and within 30 minutes there are maybe 2 left, but usually just 1.