Subject: Re: Tivoli DB limit
From: Josh-Daniel Davis <xaminmo AT OMNITECH DOT NET>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Fri, 21 Apr 2006 02:28:51 -0500
I've never heard of a 100 million file limitation in TSM.

The limits are 13.5GB for the log and 512GB for the TSM DB.
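
If you want to see where your server sits relative to those limits, an
administrative client session will show assigned capacity and utilization.
A quick sketch (the admin ID and password are placeholders):

    # Run from a TSM administrative command-line client; credentials are
    # hypothetical.  QUERY DB and QUERY LOG report capacity and pct util,
    # which you can compare against the 512GB / 13.5GB ceilings above.
    dsmadmc -id=admin -password=secret "query db format=detailed"
    dsmadmc -id=admin -password=secret "query log format=detailed"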

It's not DB2 Lite; it's closer to a port of the 1980s version of DB2
that was part of MVS.

At a 2003 TSM symposium, Dave Cannon said they were considering decoupling
the database and using a more current DB2 implementation.  Apparently
they're still "considering" it, but it's unknown whether they'll actually do
this or not.  There are technical limitations in modern DB2, specifically
the lack of a bit-vector data type or equivalent.

Even so, the DB I'm working with holds 105 million files at about 150GB.  The
limit there is the CPU and I/O capacity of the box needed to process that
many objects in the daily admin jobs.

For your server, you could look at:
Image Backup
Journalling (if it's Windows)
VirtualMountpoint option (if UNIX)
MEMORYEFFICIENTBACKUP YES combined with RESOURCEUTILIZATION 10.

These last two will tell it to process one directory at a time, but split
the work into 5 producers and 5 consumers.  This will get things moving
faster in the beginning and give it a lot of oomph, but at 50 million files
on one client, you're still looking at some serious time for any backup
solution.
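
As a rough sketch, the relevant client options for a big UNIX filesystem
could look like the stanza below (server name, address, and mount points
are made up; on Windows the same backup options go in dsm.opt):

    * dsm.sys server stanza (UNIX client) -- names and paths are hypothetical
    SErvername           TSMSRV1
      COMMMethod         TCPip
      TCPServeraddress   tsmsrv1.example.com
      * Carve the huge filesystem into separately scanned filespaces
      VIRTUALMountpoint  /bigfs/dir1
      VIRTUALMountpoint  /bigfs/dir2
      * Scan one directory at a time to keep client memory down...
      MEMORYEFficientbackup   YES
      * ...but split the scan/send work across parallel sessions
      RESOURceutilization     10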

Another option would be to use an incremental by date.  This is much
faster, as it just compares the modification date of each file to the time
of the last backup.  The drawback is that deleted files won't be expired,
and management class rebinding won't occur.  You could still use this
daily, then run a regular incremental every 10 days or so.
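
For example (the filesystem name is made up), the rotation could look like:

    # Daily: fast date-based incremental; skips the full attribute compare
    # but won't expire deleted files or rebind management classes
    dsmc incremental -incrbydate /bigfs

    # Every 10 days or so: regular progressive incremental to catch up
    dsmc incremental /bigfs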

-Josh


On 06.04.20 at 16:28 gaurav.marwaha AT EMAGEON DOT COM wrote:

Date: Thu, 20 Apr 2006 16:28:20 -0500
From: Gaurav Marwaha <gaurav.marwaha AT EMAGEON DOT COM>
Reply-To: "ADSM: Dist Stor Manager" <ADSM-L AT VM.MARIST DOT EDU>
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Tivoli DB limit

Hi,


Problem:
        We have a huge file system with about 30 to 50 million files to be
backed up.  The incremental backup does the job, but takes too long to
complete; of these 50 million files, only 20,000 or so actually change.  So
the scan sometimes runs past the 24-hour window, and the next scheduled
backup starts before the previous one has actually completed.

I found the filelist parameter, which lets you specify exactly what to back
up; we could use this, since we know from our database which files changed.

Someone tells me that the Tivoli DB can track only 100 million objects, and
that filelist might not be the correct way to do it.  He says there is a
"DB2 Lite" running behind TSM and that it has this limit?

In this scenario, what is the best approach, and is there a limit at all?
Even in normal incremental operation, how does TSM scan the include
directory list?  When it runs a normal incremental, doesn't that 100 million
limit still apply?

Thank you in advance
Gaurav M

