
Moving from Tape to Disk Only - what stgpools to use?


Sep 5, 2006
TSM v8.1.4
Running on Windows

i6000 StorageTek TL using LTO6 tapes

Moving to disk only (no tape).
Disk will be SAN connected and look and feel the same as the disk pools from that point of view.

Question - what stgpool type to use for a one stop disk only solution?
I was thinking Directory-Container Storage Pools - but I'm not sure how to protect the data.
We have access to two different disk arrays - from two different data centres - and I will need to replicate the data.
But all the doco I am reading says to replicate to another server. Problem: we only have the one TSM server,
and no time to set up another in the short term (i.e. in the time required to start replacing the TL with the disk solution).

So - is Directory-Container pools best for a disk only based data repository?
If so - Is there a way to migrate or copy data from one directory-container pool to another using just the one server (like a primary and secondary copy the way that tape works)?

I am currently reading.......
https://www.ibm.com/developerworks/...ctory-container storage pools FAQs?section=q7

But was interested in how others have set-up their environments and your experiences with it.



ADSM.ORG Moderator
Jun 16, 2006
Hi Sharon,

If you are going to container pools, I strongly recommend that you look up the Blueprint (https://www.ibm.com/support/pages/ibm-spectrum-protect-blueprints ) to make sure your environment is sized and configured properly. You need enough CPU and memory to handle the inline deduplication, and you also need a properly configured database, otherwise it will just be slow and unpleasant. So the DB has to be carved up with a sufficient number of LUNs and adequate disks. The same goes for the storage pools too.

From your existing environment, figure out if you fall in a Small, Medium or Large blueprint based on daily ingest, and then look at the specs in the Blueprint to make sure you are setup properly:
```
select date(start_time) as DATE,
       sum(summary.bytes)/1024/1024/1024/1024 as TOP_INGEST
from summary
where ( activity='BACKUP' or activity='ARCHIVE' )
group by date(start_time)
order by TOP_INGEST desc
fetch first 25 rows only
```
Now to address your questions.

There are two ways to protect a container pool. The first is via protect/replicate, and as you have read, you would need a 2nd server with your second storage array.
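As a rough sketch of what that looks like (the pool names here are made up, and you should check the exact parameters against the documentation for your server level):

```
/* On the source server: point the container pool at its twin on the DR server */
UPDATE STGPOOL sourcepool PROTECTSTGPOOL=targetpool
/* Protect the pool data first, then replicate the node metadata */
PROTECT STGPOOL sourcepool
REPLICATE NODE *
```

Running PROTECT STGPOOL before REPLICATE NODE lets the replication step skip data already sent, which keeps the node replication window short.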

The second is only really practical in small environments: protect the container pool locally to a tape pool. That's not practical for large environments because, in the event that you lose the storage pool, you need to recover the entire storage pool from tape before you can start client restores from the pool. That's because the tape copy is deduplicated and not hydrated, so you cannot do client restores from a copy container pool the same way you did with a traditional copy pool.
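For the small-environment case, the local tape protection is along these lines (pool names hypothetical; verify the parameters against your server level):

```
/* Associate a tape copy pool with the directory-container pool */
UPDATE STGPOOL contpool PROTECTLOCALSTGPOOLS=tapecopypool
/* Write the (still deduplicated) container data out to tape */
PROTECT STGPOOL contpool TYPE=LOCAL
```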

You can tier data from a directory container pool to another pool, but tiering doesn't create an additional copy, it moves the data. So that doesn't help you either.
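For completeness, tiering is driven by a storage rule, something like the sketch below (names are hypothetical, and the available parameters depend on your server level):

```
/* Move data older than 30 days from the container pool to a cold pool */
DEFINE STGRULE tier30 contpool coldpool ACTIONTYPE=TIER DELAY=30
```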

If you want to go disk-only with a Spectrum Protect solution, it's best to use the 2-server, 2-site solution. That gives you proper disaster recovery because you have a server ready to do restores if the primary goes down.


ADSM.ORG Senior Member
Mar 15, 2017
I agree with everything marclant posted. One of my goals is to get to a mostly disk-based solution, but tape will still play a part due to extended retention and data that doesn't see any sort of reduction. Also, tier to tape for extended retention has me excited! No more dedicated archives!

That said, reality is often different. I have been using tape as a copy of the directory-container pool since it was offered, and I am outside of the recommended specs for the amount of data on tape. Budgets being what they are, there was no new disk storage for a rapidly growing infrastructure. I had the tough choice of cutting retention in half or moving to a directory-container pool based system with a tape copy, so the directory container was chosen. That also allowed me to meet the retention periods set by the higher-ups. I have thankfully never had to restore the container pool. It is a thought that I dread, however; I am living on borrowed time and have been for several years (that is what keeps me up at night, by the way!). Don't be like me :)

If you can make the time, stand up a 2nd server with that storage and follow the blueprint. You will be well rewarded. If you cannot, and your total storage is under 40T (Going from memory here on size, could be wrong) using tape to get an offsite copy for the directory container pool is feasible, just expect 72+ hours before you are able to facilitate client restores (assuming your LTO drives are running fast).

Another option:
And this may not be a great idea, but why not use a filepool or diskpool as the primary, with a copy pool that is a filepool on the remote storage? Send your DB backup/volhist/devconfig to that storage and you'd be protected. You won't get any of the benefits of the container storage pools, and I'm not sure if you could, or would want to, run any sort of data reduction on the copy pool. Fig 2 of this page shows just that: https://www.ibm.com/support/knowledgecenter/en/SSEQVQ_8.1.2/srv.solutions/c_stg_pools.html
I've not done the above file-to-file setup except as a 'what if' when I had some downtime on a really, really small box. The downside is that if you do lose your main site (server / storage is smoking rubble), you will need to attach a new server, zone it, install/restore the DB, etc. I would recommend driving each storage array with its own dedicated HBAs and having as few ISLs between the sites as you can.
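If it helps, the classic primary/copy setup that figure describes boils down to something like this (device class names, paths, and sizes are made up; adjust for your environment):

```
/* FILE device classes on the local and remote arrays */
DEFINE DEVCLASS localfile DEVTYPE=FILE MAXCAPACITY=50G DIRECTORY=/tsm/localdisk
DEFINE DEVCLASS remotefile DEVTYPE=FILE MAXCAPACITY=50G DIRECTORY=/tsm/remotedisk
/* Primary file pool, plus a copy pool on the remote storage */
DEFINE STGPOOL filepool localfile MAXSCRATCH=500
DEFINE STGPOOL filecopy remotefile POOLTYPE=COPY MAXSCRATCH=500
/* Daily copy of the primary, and a DB backup to the remote storage */
BACKUP STGPOOL filepool filecopy
BACKUP DB DEVCLASS=remotefile TYPE=FULL
```

With the DB backup, volhist, and devconfig on the remote array, a replacement server could be restored at the second site, albeit with the manual rebuild steps noted above.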

If you are still dead set on the container storage pools because you may achieve amazing data reduction ratios, a dedicated server as the replication target is the way to go. I'd also recommend you run a more recent code level; there have been a lot of improvements and pain points worked out since the lower levels.

Hope it helps, and mods, if I pointed Sharon here down the wrong path, please smack me!
