storage pool reconfig, another story.

TeamTLR

I would like some opinions. I implemented an active-data storage pool (ADP) using a 6 TB SAN LUN. The storage hierarchy is:

- Primary stgpool -> V7 SAN LUNs (FILE dev class), called v7db2cyb
- Its active-data pool -> a V7 SAN LUN (FILE dev class), called v7db2active
- The primary stgpool's next stgpool is a Cybernetics VTL with 6 drives, called cybvtl1
- v7db2cyb's auto-copy mode is client

- The Cyber VTL stgpool (cybvtl1) is a primary stgpool
- Its next stgpool is tapepool (LTO5, TS3200 tape library); this is not used anymore
- Its copy pool is copypool, on LTO5 in the same library; the copy pool is still in use
- Its auto-copy mode is client
- The Cyber VTL replicates itself offsite to another Cyber VTL

The TS3200 has 4 LTO5 drives.
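For reference, here is roughly how that chain maps onto the pool settings (the values come from the q stg f=d output further down; this is just my reconstruction, not necessarily the commands that were originally run):

/* SAN FILE pool: migrates to the VTL, client sessions also write active data to the ADP */
UPDATE STGPOOL V7DB2CYB NEXTSTGPOOL=CYBVTL1 ACTIVEDATAPOOLS=V7DB2ACTIVE AUTOCOPY=CLIENT HIGHMIG=90 LOWMIG=25
/* VTL pool: next pool is tape (unused), and it is associated with the LTO5 copy pool */
UPDATE STGPOOL CYBVTL1 NEXTSTGPOOL=TAPEPOOL COPYSTGPOOLS=COPYPOOL AUTOCOPY=CLIENT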

With limited tape drives, my goal is to have TSM write its backups to SAN while also writing to the VTL, using an active-data pool. I want to keep the ADP on SAN because it lives on the TSM server itself, though this is not a must. Then, during the day, migrate from the SAN pool down to LTO5. As of last week, data was migrated every morning from the TSM FILE pool to the Cyber VTL, and that migration would also call for an LTO5 copypool tape.
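To make the daytime flow concrete, I am picturing a couple of admin schedules along these lines (a rough sketch only: the schedule names, start times, DURATION and MAXPROCESS values are placeholders I made up):

/* copy the data to the LTO5 copy pool while it is still sitting on disk */
DEFINE SCHEDULE DAILY_STGBACKUP TYPE=ADMINISTRATIVE CMD="BACKUP STGPOOL V7DB2CYB COPYPOOL MAXPROCESS=2" ACTIVE=YES STARTTIME=06:00 PERIOD=1 PERUNITS=DAYS
/* then drain the SAN FILE pool down to the Cyber VTL during the day */
DEFINE SCHEDULE DAILY_MIGRATE TYPE=ADMINISTRATIVE CMD="MIGRATE STGPOOL V7DB2CYB LOWMIG=0 DURATION=360" ACTIVE=YES STARTTIME=08:00 PERIOD=1 PERUNITS=DAYS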

So, do I create another pool as a copy pool on the same Cyber VTL device class (cybdev), dividing the drives in half, so that migration off the primary pool v7db2cyb keeps running while client backups do simultaneous writes to a new pool on the VTL, cybvtl2-copy? On second thought, I guess I'd have to change the device class of the Cyber VTL to be shared.
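If I went that route, I think the mechanics would be something like this (only a sketch; CYBLIB is a placeholder for whatever the VTL library is actually called, and the DEVTYPE, MAXSCRATCH and MOUNTLIMIT values are guesses I would have to verify):

/* second device class against the same VTL library, so the 6 drives can be split 3/3 */
DEFINE DEVCLASS CYBDEV2 DEVTYPE=LTO LIBRARY=CYBLIB MOUNTLIMIT=3
UPDATE DEVCLASS CYBDEV MOUNTLIMIT=3
/* new copy pool on the second device class */
DEFINE STGPOOL CYBVTL2-COPY CYBDEV2 POOLTYPE=COPY MAXSCRATCH=500 COLLOCATE=NO
/* let client sessions write to it simultaneously while backing up to the primary FILE pool */
UPDATE STGPOOL V7DB2CYB COPYSTGPOOLS=CYBVTL2-COPY AUTOCOPY=CLIENT

As far as I understand it, two pools (or two device classes) can already point at the same library without any "shared" setting on the device class; the MOUNTLIMIT on each device class is what actually divides the drives between them, but I could be wrong on that.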

Or do I recreate the cybvtl primary stgpool as a copy stgpool on a shared device? I have enough space on the primary stgpool (v7db2cyb), 40 TB, to hold about 3.7 weeks of daily backups.
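For sizing context, 40 TB lasting 3.7 weeks (about 26 days) works out to roughly 1.5 TB of new backup data landing in v7db2cyb per day.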

IBM best practice states: "Create a minimum of two storage pools: one active-data pool, which contains only active data, and one copy storage pool, which contains both active and inactive data. You can use the active-data pool volumes to restore critical client node data, and afterward you can restore the primary storage pools from the copy storage pool volumes. Active-data pools must not be considered for recovery of a primary pool or volume unless the loss of inactive data is acceptable."
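The way I read that in command terms (just a sketch; the MAXPROCESS value is arbitrary): the ADP holds only active versions for fast restores of critical nodes, while the copy pool is what you rebuild a lost primary pool from.

/* put the active versions from the primary pool into the active-data pool */
COPY ACTIVEDATA V7DB2CYB V7DB2ACTIVE MAXPROCESS=2
/* after losing primary volumes, recreate them from the copy pool */
RESTORE STGPOOL V7DB2CYB MAXPROCESS=2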

I inherited this environment and want to make sure it's set up the best it can be with what I have available.


tsm: R2TSM01> q stg

Storage Pool  Device Class      Storage   Estimated   Pct   Pct   High Mig  Low Mig  Next Storage
Name          Name              Type      Capacity    Util  Migr  Pct       Pct      Pool
------------  ----------------  --------  ----------  ----  ----  --------  -------  ------------
AUTODEPLOY    AUTODEPLOY        DEVCLASS  0.0 M       0.0   0.0   90        70
COPYPOOL      LTO5              DEVCLASS  649,422 G   10.3
CYBVTL1       CYBDEV            DEVCLASS  567,427 G   10.7  17.1  90        70       TAPEPOOL
HPVTLAIX      HPVTLAIX_DVCL     DEVCLASS  464,173 G   2.9   5.0   90        70
HPVTLDB2LGS   HPVTLDB2LGS_DVCL  DEVCLASS  0.0 M       0.0   0.0   90        70
HPVTLLNX      HPVTLLNX_DVCL     DEVCLASS  23,251 G    0.3   1.5   90        70
HPVTLWIN      HPVTLWIN_DVCL     DEVCLASS  23,045 G    40.1  43.9  90        70
TAPEPOOL      LTO5              DEVCLASS  179,714 G   13.2  5.1   90        70
V7DB2ACTIVE   V7DISK2VTL-2      DEVCLASS  6,090 G     11.4
V7DB2CYB      V7DISK2VTL        DEVCLASS  39,912 G    9.0   9.0   90        25       CYBVTL1
WINV72TAPE    WINMYFILES        DEVCLASS  0.0 M       0.0   0.0   90        70       TAPEPOOL

tsm: R2TSM01>q stg V7DB2CYB f=d

Storage Pool Name: V7DB2CYB
Storage Pool Type: Primary
Device Class Name: V7DISK2VTL
Storage Type: DEVCLASS
Cloud Type:
Cloud URL:
Cloud Identity:
Cloud Location:
Estimated Capacity: 40,168 G
Space Trigger Util: 13.2
Pct Util: 7.7
Pct Migr: 7.7
Pct Logical: 100.0
High Mig Pct: 90
Low Mig Pct: 25
Migration Delay: 1
Migration Continue: Yes
Migration Processes: 3
Reclamation Processes: 2
Next Storage Pool: CYBVTL1
Reclaim Storage Pool:
Maximum Size Threshold: No Limit
Access: Read/Write
Description:
Overflow Location:
Cache Migrated Files?:
Collocate?: Node
Reclamation Threshold: 60
Offsite Reclamation Limit:
Maximum Scratch Volumes Allowed: 2,048
Number of Scratch Volumes Used: 52
Delay Period for Volume Reuse: 0 Day(s)
Migration in Progress?: Yes
Amount Migrated (MB): 1,825,231.64
Elapsed Migration Time (seconds): 24,123
Reclamation in Progress?: No
Last Update by (administrator): DEREKSMITH
Last Update Date/Time: 09/15/2017 09:40:25
Storage Pool Data Format: Native
Copy Storage Pool(s):
Active Data Pool(s): V7DB2ACTIVE
Continue Copy on Error?: Yes
CRC Data: No
Reclamation Type: Threshold
Overwrite Data when Deleted:
Deduplicate Data?: No
Additional space for protected data:
Auto-copy Mode: Client
Contains Data Deduplicated by Client?: No



Storage Pool Name: CYBVTL1
Storage Pool Type: Primary
Device Class Name: CYBDEV
Storage Type: DEVCLASS
Cloud Type:
Cloud URL:
Cloud Identity:
Cloud Location:
Estimated Capacity: 567,985 G
Space Trigger Util:
Pct Util: 10.7
Pct Migr: 17.1
Pct Logical: 100.0
High Mig Pct: 90
Low Mig Pct: 70
Migration Delay: 1
Migration Continue: Yes
Migration Processes: 3
Reclamation Processes: 2
Next Storage Pool: TAPEPOOL
Reclaim Storage Pool:
Maximum Size Threshold: No Limit
Access: Read/Write
Description: Primary stg pool for the Cybernetics VTL
Overflow Location:
Cache Migrated Files?:
Collocate?: Node
Reclamation Threshold: 60
Offsite Reclamation Limit:
Maximum Scratch Volumes Allowed: 2,048
Number of Scratch Volumes Used: 351
Delay Period for Volume Reuse: 0 Day(s)
Migration in Progress?: No
Amount Migrated (MB): 0.00
Elapsed Migration Time (seconds): 0
Reclamation in Progress?: No
Last Update by (administrator): DEREKSMITH
Last Update Date/Time: 10/02/2017 15:05:49
Storage Pool Data Format: Native
Copy Storage Pool(s): COPYPOOL
Active Data Pool(s):
Continue Copy on Error?: Yes
CRC Data: No
Reclamation Type: Threshold
Overwrite Data when Deleted:
Deduplicate Data?: No
Processes For Identifying Duplicates:
Compressed:
Additional space for protected data:
Total Unused Pending Space:
Deduplication Savings:
Compression Savings:
Total Space Saved:
Auto-copy Mode: Client
Contains Data Deduplicated by Client?: No
 
If I were to make this environment sound, I would do this:

- Use the tape library as the copy pool (as it is right now).
- Use the SAN disk to cache incoming backups while simultaneously writing to the VTL and to the tape library copy pool.

Simple and straightforward.
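One way to read that in TSM terms, as a sketch only (pool names as they appear in your q stg output; this single command is just the gist, not a full plan):

/* client backups cache on the SAN FILE pool and are simultaneously copied to the LTO5 copy pool;
   migration then moves the primary copy on to the Cyber VTL during the day */
UPDATE STGPOOL V7DB2CYB NEXTSTGPOOL=CYBVTL1 COPYSTGPOOLS=COPYPOOL AUTOCOPY=CLIENT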
 