ADSM-L

Re: Performance Large Files vs. Small Files

2001-03-04 21:18:09
From: "Mark S." <stapleto AT BERBEE DOT COM>
Date: Sun, 4 Mar 2001 19:58:10 -0600
bbullock wrote:
>         The problem that keeps me awake at night now is that we now have
> manufacturing machines wanting to use TSM for their backups. In the past
> they have used small DLT libraries locally attached to the host, but that's
> labor intensive and they want to take advantage of our "enterprise backup
> solution". A great coup for my job security and TSM, as they now see the
> benefit of TSM.
>
>         The problem with these hosts is that they generate many, many small
> files every day. Without going into any detail, each file is a test on a
> part that they may need to look at if the part ever fails. Each part gets
> many tests done to it through the manufacturing process, so many files are
> generated for each part.
>
>         How many files? Well, I have one Solaris-based host that generates
> 500,000 new files a day in a deeply nested directory structure (about 10
> levels deep with only about 5 files per directory). Before I am asked, "no,
> they are not able to change the directory or file structure on the host. It
> runs proprietary applications that can't be altered". They are currently
> keeping these files on the host for about 30 days and then deleting them.

The solution that leaps to mind right off the top of my head is multiple
TSM servers sharing a library. It would be the easiest way to split up the
file open/file close bottleneck that plagues NT and NetWare clients with
their myriad small files. This solution also splits out the database
load.
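
For reference, library sharing between two TSM servers is set up roughly
like this (server names, library name, and addresses below are made-up
examples; the syntax is from the TSM 4.x administrative command set, so
check the Administrator's Reference for your version):

```
/* On the server acting as library manager */
define library 3584lib libtype=scsi shared=yes

/* On each library client server */
define server libmgr serverpassword=secret hladdress=10.0.0.1 lladdress=1500
define library 3584lib libtype=shared primarylibmanager=libmgr
```

Each client server then defines its device class against the shared
library as usual, and the library manager arbitrates drive and volume
ownership.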

Look to the TSM server-to-server redbook for help.

--
Mark Stapleton (stapleton AT berbee DOT com)