Subject: Re: Metrics for a "large" filesystem?
From: TSM_User <tsm_user AT YAHOO DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Wed, 20 Apr 2005 18:29:43 -0700
Have you looked at journaling with the V5.3 client?
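
(If you go that route: on the Windows 5.3 client, journal-based backup is
handled by the TSM journal service, which reads a tsmjbbd.ini file. As a rough
sketch from memory, with the drive letters purely placeholders, the key setting
is something like the lines below; check the 5.3 B/A client manual for the
exact section and option names:

    [JournaledFileSystemSettings]
    JournaledFileSystems=D: E:

Once the service is journaling those filesystems, dsmc incremental can work
from the change journal instead of scanning every object, which is exactly
where tens of millions of files hurt.)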

asr AT UFL DOT EDU wrote:

==> On Mon, 18 Apr 2005 16:56:22 -0400, Thomas Denier said:

> I tend to think of anything over three million files as big, but that is
> based on experience with just two cases. We have a client with a bit over
> three million files that was chronically troublesome until it went through a
> major hardware upgrade. We have a client with over nine million files that
> remains chronically troublesome.

Eugh, agreed.


> You mention the pain of watching the log scroll. The 'quiet' option, which
> eliminates most log output, can result in a significant performance
> improvement for clients with large numbers of files.
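
(For what it's worth, that's a one-line client option; a minimal sketch, with
the option placed either in dsm.opt or passed on the command line:

    * in dsm.opt
    QUIET

    dsmc incremental -quiet

It suppresses most of the per-file processing messages, which is where the
bulk of the log volume comes from.)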

Well, it's not really taking much time writing the logs. For example, as I
type this it's working on its 7th hour of running and has processed 5.6M
files. I've got 160K lines in the logfile, averaging ~7 log lines a second,
so the logging probably isn't adding to my performance problems. :) It'll
probably finish processing the full 20M files sometime late tomorrow morning.

By painful log scrolling, I mean that on sane-sized filesystems I'm accustomed
to seeing a few thousand files handled every second. Here, instead, it takes a
few seconds to handle the next 500. Yuck.

I'm probably going to split it up: my architecture would make a 'split this
filesystem up 10 ways' approach very neat and orderly, and I'd go from 4 x 5M-file
filespaces to 40 x 500K-file filespaces, which I think would be much nicer.
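
(On the UNIX client the clean way to do that split is with VIRTUALMountpoint
entries in the server stanza of dsm.sys, so each subtree gets backed up as its
own filespace. A minimal sketch, with the server name and paths made up:

    SErvername  TSMSERV1
       VIRTUALMountpoint  /bigfs/part01
       VIRTUALMountpoint  /bigfs/part02
       * ...and so on, one entry per subtree

The DOMAIN statement or the schedule's object list then names the virtual
mount points rather than /bigfs itself.)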


- Allen S. Rout

