Subject: Re: Tivoli DB limit
From: Richard Sims <rbs AT BU DOT EDU>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Fri, 21 Apr 2006 08:37:38 -0400
On Apr 20, 2006, at 5:28 PM, Gaurav Marwaha wrote:

        We have a huge file system with about 30 to 50 million files to be
backed up. The incremental backup does the job, but takes too long to
complete: out of these 50 million files, only 20,000 or so actually change.
So the scanning sometimes runs into a 24-hour window, and the next
scheduled backup starts without the previous one actually completing.

Gaurav -

You may be new to TSM and not realize that this question has been
pursued many times in this forum. You can best review past postings
at www.mail-archive.com/adsm-l AT vm.marist DOT edu/ .

This is formally known as the "Many small files" problem. See that
entry in http://people.bu.edu/rbs/ADSM.QuickFacts or
http://www.tsmwiki.com/tsmwiki for collected information.

        Someone tells me that the Tivoli DB can take only 100 million
objects for tracking, and filelist might not be a correct way to do it.
He says there is a DB2 lite running behind TSM and that has this limit?

Your information source is faulty. I'd advise consulting more
authoritative references, as found on the IBM Web site and in documents
referenced in mailing list postings.

It is more typically the case that the client is "limited" by running
out of the memory needed to hold the Active files list it gets from the
server at the start of Incremental backup processing. It behooves a
client serving a data complement as large as this to run a 64-bit
version of the operating system for that platform, to best deal with
the volume of metadata it must handle. This provides the address space
needed for holding such lists in memory.
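
To illustrate the usual mitigations in concrete terms -- a sketch only,
not a recommendation for your particular setup: the option names below
are standard B/A client options, but the /bigfs mount point and the
subdirectory names are placeholders. On a Unix client, dsm.sys might
carry:

   MEMORYEFFICIENTBACKUP  YES
   RESOURCEUTILIZATION    5
   VIRTUALMOUNTPOINT      /bigfs/proj1
   VIRTUALMOUNTPOINT      /bigfs/proj2

MEMORYEFFICIENTBACKUP has the client process one directory at a time
instead of holding the whole list in memory at once; RESOURCEUTILIZATION
allows more parallel client sessions; VIRTUALMOUNTPOINT presents each
subdirectory as its own file space, so the parts can be scanned and
scheduled separately. Between full progressive incrementals you can also
run

   dsmc incremental /bigfs/proj1 -incrbydate

but note that incremental-by-date neither expires deleted files nor
picks up attribute-only changes, so it supplements rather than replaces
regular progressive incremental backup.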

 Richard Sims
