Still have to scrub the space due to requirements. I can inform people all day long, but I still need a procedure, and accidents happen.
You are looking for this:
https://www.ibm.com/support/knowled...com.ibm.itsm.srv.doc/c_mngdata_shredding.html
Note that shredding applies only to data in storage pools that have been explicitly configured to support it, and only random-access disk pools can be set up for shredding.
Container pools do not support shredding, but there's not really a need, given the way the data is stored:
- files are broken into chunks
- chunks are stored
- duplicate chunks are referenced, not stored again
So for a given file, TSM keeps track of which chunks are needed to make up that file. Once you delete the backup of that file, TSM deletes the references to those chunks. If other files reference the same chunks, those chunks are not deleted, but it's still not possible to recover the deleted file, because there is no longer any way to know which of the billions of chunks on the server used to make it up.
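To make that concrete, here is a toy sketch (not TSM's actual implementation) of reference-counted deduplication: files are split into fixed-size chunks, chunks are stored once under their hash, and deleting a file removes only its reference list, plus any chunks no other file still references. The names and chunk size are made up for illustration.

```python
import hashlib

CHUNK_SIZE = 4  # tiny chunks for illustration; real dedup uses far larger chunks

store = {}  # chunk hash -> chunk bytes (each unique chunk stored once)
index = {}  # file name  -> ordered list of chunk hashes

def backup(name, data):
    hashes = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        h = hashlib.sha256(chunk).hexdigest()
        store.setdefault(h, chunk)  # duplicate chunks are referenced, not stored again
        hashes.append(h)
    index[name] = hashes

def delete(name):
    # Drop only this file's chunk references; a chunk is physically
    # removed only when no remaining file references it.
    hashes = index.pop(name)
    live = {h for hs in index.values() for h in hs}
    for h in set(hashes):
        if h not in live:
            del store[h]

backup("a.txt", b"AAAABBBBCCCC")
backup("b.txt", b"XXXXBBBBCCCC")  # shares the BBBB and CCCC chunks with a.txt
delete("a.txt")

# The shared chunks survive (b.txt still needs them), but no record
# remains of which chunks once made up a.txt.
assert "a.txt" not in index
```

After the delete, the orphaned AAAA chunk is gone and the surviving chunks are just anonymous blobs among all the others on the server, which is why reconstruction of the deleted file is not feasible.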
So your options are to continue using a directory-container pool, if you are satisfied that deduplicated data cannot be reconstructed after the file and its references are deleted, or, for sensitive data, to set up a random-access disk storage pool with shredding.
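For the second option, the setup looks roughly like this (a sketch only; the pool and volume names are placeholders, and you should verify the exact parameters against the linked documentation for your server level):

```
/* Random-access DISK pool that overwrites deleted data 3 times */
DEFINE STGPOOL sensitive_disk DISK SHRED=3

/* Give the pool a volume to use (path and size are examples) */
DEFINE VOLUME sensitive_disk /tsm/sensvol1.dsm FORMATSIZE=10240

/* If the SHREDDING server option is set to MANUAL rather than
   AUTOMATIC, run the shred pass yourself: */
SHRED DATA DURATION=60
```

Then point the management class for the sensitive data at that pool, and keep in mind that shredded pools cannot also be deduplicated.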