Hi,
I have the following HSM FS, which migrates data from a 20 TB GPFS file system to LTO-5 tapes:
tsm: tsm-srv-a>q filesp hsm-node-a /gpfs/hsm-1
Node Name: hsm-node-a
Filespace Name: /gpfs/hsm-1
FSID: 14
Platform: Linux x86-64
Filespace Type: GPFS
Is Filespace Unicode?: No
Capacity: 20,479 GB
Pct Util: 48.9
tsm: tsm-srv-a>q occ hsm-node-a /gpfs/hsm-1

Node Name: hsm-node-a
Type: Bkup
Filespace Name: /gpfs/hsm-1
FSID: 14
Storage Pool Name: BCK-1
Number of Files: 3,183,589
Physical Space Occupied (MB): 21,560,257.28
Logical Space Occupied (MB): 21,545,729.28

Node Name: hsm-node-a
Type: Bkup
Filespace Name: /gpfs/hsm-1
FSID: 14
Storage Pool Name: BCK-1-COPY
Number of Files: 3,183,587
Physical Space Occupied (MB): 21,556,129.28
Logical Space Occupied (MB): 21,539,937.28

Node Name: hsm-node-a
Type: SpMg
Filespace Name: /gpfs/hsm-1
FSID: 14
Storage Pool Name: HSM-1
Number of Files: 1,293,109
Physical Space Occupied (MB): 12,474,465.28
Logical Space Occupied (MB): 12,474,465.28

Node Name: hsm-node-a
Type: SpMg
Filespace Name: /gpfs/hsm-1
FSID: 14
Storage Pool Name: HSM-1-COPY
Number of Files: 1,293,109
Physical Space Occupied (MB): 12,474,465.28
Logical Space Occupied (MB): 12,474,465.28
Due to a disk array failure, I'm restoring (dsmc restore) the contents of this FS to a different disk array.
The new FS is *not* HSM-managed - it's just a regular filesystem on another Linux box.
I have already restored about 80 TB of files and the restore is still running.
That volume does not correspond with the "q occ" output above - the Physical/Logical Space Occupied reported there is much smaller than 80 TB.
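As a sanity check, here is how I compare the numbers (a quick awk sketch; the two values are the Logical Space Occupied figures copied from the BCK-1 and HSM-1 pools in the "q occ" output above, converted MB -> TB with binary 1024-based units):

```shell
# Convert the "q occ" Logical Space Occupied figures from MB to TB
# for comparison against the ~80 TB restored so far.
bkup_mb=21545729.28     # Logical MB in pool BCK-1 (Type: Bkup)
spmg_mb=12474465.28     # Logical MB in pool HSM-1 (Type: SpMg)
awk -v b="$bkup_mb" -v s="$spmg_mb" 'BEGIN {
    printf "Bkup: %.1f TB, SpMg: %.1f TB\n", b/1024/1024, s/1024/1024
}'
# Bkup: 20.5 TB, SpMg: 11.9 TB
```

So even adding both pool types together I get roughly 32 TB, still far short of the 80 TB restored.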
Can you comment on this?
Thanks in advance!