TOC sizing no info

KennyMartin

Newcomer
Joined
May 15, 2009
Hey all, I've been tasked with setting up NDMP with TSM server 5.5.1.1. I'm not a dedicated TSM administrator; I'm mainly a system admin cum jack-of-all-trades, so be gentle.

I have been somewhat confused about Table of Contents (TOC) sizing. There is no specific formula for TOC sizing anywhere in the TSM documentation, and I've also read varying accounts of TOC sizes on the web.

One account is from an admin who says he has backed up 2 terabytes of data and has a TOC of 50 GB; another admin says they have 1 terabyte and a TOC of 5 GB.

So, to all you gurus who have experience with TOCs: what are the rules for estimating the size of your TOC?

Also, I'm told by those above me that the data backed up via NDMP will have to be retained for 7 years for legal reasons. Are NDMP backups with a TOC a suitable method for that?

All comments welcome good and bad.

I'm aware that TSM 6.1 currently doesn't support TOC.
 
Since the TOC is, well, a table of contents for the volume, its size depends on the number of files/dirs on that volume, not on the volume's size. So yes, you could have a 2 TB volume with millions of files that produces a 50 GB TOC, or a 1 TB volume with thousands of files producing a 5 GB TOC. Your TOC will be larger on full backups than on differentials, because the diffs won't back up all of the files.
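To make the point above concrete (TOC size scales with the number of entries, not volume size), here's a back-of-envelope estimator. The ~500 bytes per TOC entry is purely an assumed illustration figure, not an IBM number; run a test backup and calibrate it for your own environment.

```python
def estimate_toc_mb(num_entries, bytes_per_entry=500):
    """Rough TOC size estimate in MB.

    num_entries     -- total files + directories on the volume
    bytes_per_entry -- ASSUMED average cost of one TOC record
                       (path name plus metadata); measure your own
                       backups to calibrate this, it is a guess.
    """
    return num_entries * bytes_per_entry / (1024 * 1024)

# A volume with 10 million files/dirs at ~500 bytes each:
print(round(estimate_toc_mb(10_000_000)))  # → 4768 (MB), i.e. ~4.7 GB
```

The takeaway matches the trial-and-error advice below: the file count dominates, so two volumes of identical capacity can produce wildly different TOC sizes.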

When setting up ours, it was basically a trial and error type situation. We ran test backups and watched to see how much impact the TOCs had on TSM DB. We ended up needing to allocate more space to the TSM DB to handle the large TOCs we were generating.

I realize you're not primarily a TSM admin, but there are some points about the TSM DB that come into play here. There are 3 buckets to look at with the TSM DB: total available space, total assigned space, and the percentage of that assigned space used. TSM stores the TOC temporarily in unused table space in the DB, so it's important to have space available, but not assigned. For example, I have the following output from a q db command:

Available Assigned   Maximum   Maximum    Page     Total      Used   Pct  Max.
    Space Capacity Extension Reduction    Size    Usable     Pages  Util   Pct
     (MB)     (MB)      (MB)      (MB) (bytes)     Pages                 Util
--------- -------- --------- --------- ------- --------- --------- ----- -----
   54,108   11,008    43,100    10,672   4,096 2,818,048    88,421   3.1   4.0

So I have 54 GB available and 11 GB assigned. That means I have 43 GB of unused space that the TOCs will be temporarily stored in until the TOC load retention option's value expires. The TOC then gets written out to the disk storage pool and migrates to tape.
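For reference, the knobs described above map onto TSM administrative commands along these lines (this is a sketch: the node name `nasnode` and the file system path are placeholders, and you should check the 5.5 Administrator's Reference for exact syntax and defaults in your environment):

```
set tocloadretention 120
backup node nasnode /vol/vol1 mode=full toc=yes
```

The first line controls how many minutes a loaded TOC stays in the DB's unused table space; the second requests TOC creation as part of an NDMP full backup of that file system.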
 
I currently back up 15-20 TB of NAS and I use 200 MB volumes for my TOC. Never had an issue. If the volumes are too big, you sometimes can't reclaim them fast enough, so it's a waste of space.
 
Thanks peeps. The file server that I'm backing up will create a rather large TOC due to the large number of individual files, and the admins of these servers want to keep the files for 7 years. Using a TOC is not suitable for this, as we would end up with a 100-140 GB TOC.

There are two things here. First, for those of you who have a large TOC, say 20 GB and above: how long does it take you to restore a file?

The other thing is that I'm told IBM doesn't support a large TOC size, but it's a bit grey what counts as "large". Does anybody know the limit beyond which they won't support it?
 
I can't comment on a large TOC not being supported; I haven't run into anything where that's come up. As for restore time, it can take us up to an hour to load the TOC on a Winders web client when doing point-in-time restores.
 