ADSM-L

Re: Performance Large Files vs. Small Files

From: Suad Musovich <suad AT CCU1.AUCKLAND.AC DOT NZ>
Date: Thu, 22 Feb 2001 00:36:51 +1300
On Tue, Feb 20, 2001 at 03:21:34PM -0700, bbullock wrote:
...
>         How many files? Well, I have one Solaris-based host that generates
> 500,000 new files a day in a deeply nested directory structure (about 10
> levels deep with only about 5 files per directory). Before I am asked, "no,
> they are not able to change the directory of file structure on the host. It
> runs proprietary applications that can't be altered". They are currently
> keeping these files on the host for about 30 days and then deleting them.
>
>         I have no problem moving the files to TSM on a nightly basis, we
> have a nice big network pipe and the files are small. The problem is with
> the TSM database growth, and the number of files per filesystem (stored in
> TSM). Unfortunately, the directories are not shown when you do a 'q occ' on
> a node, so there is actually a "hidden" number of database entries that are
> taking up space in my TSM database that are not readily apparent when
> looking at the output of "q node".

Why not put a TSM server on the Solaris box and back it up to one of the other
servers as virtual volumes? That would move the database burden to the Solaris
host, while the data itself would be kept as large objects on the tape-attached
TSM server.
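The virtual-volume setup would look roughly like the following from the administrative client. Server names, passwords, addresses, and pool sizes here are placeholders, and the exact options should be checked against the TSM Administrator's Reference:

```
/* On the tape-attached (target) server: allow the Solaris server */
/* to store virtual volumes by registering it as a server-type node. */
register node SOLARIS_SRV secret type=server

/* On the Solaris (source) server: define the target server, a device */
/* class of DEVTYPE=SERVER, and a storage pool that writes to it. */
define server TAPESRV serverpassword=secret hladdress=tapesrv.example.com lladdress=1500
define devclass remote_class devtype=server servername=TAPESRV
define stgpool remote_pool remote_class maxscratch=100
```

Backups landing in remote_pool would then travel over the network and sit as large objects on the target server's tape, while the half-million per-file entries stay in the Solaris server's own database.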

I also remember reading about grouping files together as a single object, though
I can't remember whether it could do selective groups of files or only whole
filesystems.
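Even without a TSM feature for it, the grouping idea can be approximated on the client side: bundle the nested tree into one archive before backup, so the TSM database carries one entry per archive instead of one per file. This is a sketch, not anything from the original post; `bundle_tree` is a hypothetical helper and the `dsmc` invocation at the end is an assumption about the client setup:

```shell
#!/bin/sh
# Sketch of the "group files into one object" idea: tar a deeply
# nested tree into a single archive, so TSM stores one object
# (and one database entry) per run instead of one per file.
# bundle_tree is a hypothetical helper, not a TSM command.
bundle_tree() {
    src=$1
    archive=$2
    # -C changes into the tree so the archive holds relative paths.
    tar -cf "$archive" -C "$src" .
}

# Demo on a throwaway tree (stands in for the real 10-level structure):
demo=$(mktemp -d)
mkdir -p "$demo/level1/level2/level3"
echo "payload" > "$demo/level1/level2/level3/data.txt"
bundle_tree "$demo" "$demo.tar"
tar -tf "$demo.tar"

# The single archive would then go to TSM via the client, e.g.:
#   dsmc selective "$demo.tar"    (hypothetical invocation)
```

The trade-off is restore granularity: pulling back one small file means retrieving and unpacking the whole archive.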

Cheers, Suad