Hi DazRaz,
I really appreciate your analysis, thank you.
But, mea culpa, I left out some important information, I'm sorry.
I will try to explain why I need to expire the data in backup storage pool B and what kind of problem I'm experiencing.
Introduction:
In the past, on our HPC cluster, we created 3 GPFS file systems:
/users/home, the users' home fs;
/work, the scratch fs;
/archive, the HSM managed fs.
The /users/home file system is protected by incremental backup based on the STANDARD management class, which writes data to backup storage pool A, so:
STANDARD MGMT CLASS (DEFAULT) -> Backup Policy A (2 versions data exists, 1 version data deleted, 30 days extra version, 60 days only version) -> backup storage pool A
The /archive file system is HSM managed, but before migrating the files we usually back them up. In the TSM client dsm.sys config file there is the following statement:
include.backup /archive/.../* archivefs_bck
This backup is based on the ARCHIVEFS_BCK management class, which writes data to backup storage pool B, so:
CUSTOM MGMT CLASS -> Backup Policy B (2 versions data exists, 1 version data deleted, 30 days extra version, 60 days only version) -> backup storage pool B
Anyway, in order to reduce the number of tapes used by this backup, I deleted a lot of unneeded files from the /archive fs and changed Backup Policy B as follows:
CUSTOM MGMT CLASS -> Backup Policy B (1 version data exists, 0 versions data deleted, 0 days extra version, 0 days only version) -> backup storage pool B
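For reference, a change like the one above would be made with the standard TSM administrative commands sketched below. DOM1 and PS1 are placeholder names for the policy domain and policy set, STANDARD is assumed as the backup copy group name, and the admin credentials are placeholders; adjust everything to your environment. One thing worth double-checking: copy group changes have no effect until the policy set is validated and activated.

```shell
# Placeholder names: DOM1 = policy domain, PS1 = policy set (adjust to your setup).
# Set the ARCHIVEFS_BCK backup copy group to 1/0/0/0 retention.
dsmadmc -id=admin -password=secret "update copygroup DOM1 PS1 ARCHIVEFS_BCK STANDARD type=backup verexists=1 verdeleted=0 retextra=0 retonly=0"

# The new retention values only take effect once the policy set is activated.
dsmadmc -id=admin -password=secret "validate policyset DOM1 PS1"
dsmadmc -id=admin -password=secret "activate policyset DOM1 PS1"
```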
Then I executed "expire inventory", but nothing happens.
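A minimal sketch of how I run expiration and check what it actually did (these are standard TSM administrative commands; the admin credentials are placeholders):

```shell
# Run expiration in the foreground so it completes within the session.
dsmadmc -id=admin -password=secret "expire inventory wait=yes"

# Check the activity log for the expiration summary (objects examined/deleted).
dsmadmc -id=admin -password=secret "query actlog begindate=today search=expiration"
```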
I think this issue is due to ARCHIVEFS_BCK not being the default management class.
From the "q act" output, it seems that expire inventory doesn't check the backup storage pool related to the ARCHIVEFS_BCK mgmt class.
I could try to set this mgmt class as the default one, launch expire inventory, and then set the old mgmt class as the default again.
Can I do that? Or do you think that this policy could "expire" the backup storage pool A data as well?