Re: Scalability information sought
2006-05-02 09:47:21
Jon LaBadie wrote:
On Tue, May 02, 2006 at 10:45:35AM +0200, Alexander Jolk wrote:
On one of my two sites, we have 20TB total disk capacity, of which about
16TB is in use; on two servers, 800GB nightly (two 200GB LTO-2 tapes per
server per night); 45 clients, split into about 2200 individual DLEs.
Just an FMI question, Alexander:
That is about 50 DLEs per client.
Are you doing that on a per-user basis or something similar?
Are they separate file systems, separate directory trees,
or are you doing the old include/exclude thing?
These 50 DLEs are individual directory trees. When one particular
directory gets too big, I split it into several DLEs: one per
subdirectory, plus one for the root that excludes those subdirs. I
wrote a small Perl script that helps with the splitting, producing
disklist stanzas like the following:
# edge3:/vol/SEQS
edge3 /vol/SEQS/BANK comp-user-tar 1
edge3 /vol/SEQS/D1 comp-user-tar 1
edge3 /vol/SEQS/F comp-user-tar 1
edge3 /vol/SEQS/F5 comp-user-tar 1
edge3 /vol/SEQS {
comp-work-tar
exclude append "./BANK"
exclude append "./D1"
exclude append "./F"
exclude append "./F5"
} 1
# end edge3:/vol/SEQS
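A minimal sketch of such a splitting helper, assuming it just takes a host, a path, and the subdirectories to break out (the original is a Perl script I haven't seen; this Python version, its function name, and its default dumptypes are my own assumptions, modeled on the stanza above):

```python
# Hypothetical helper that emits Amanda disklist stanzas: one DLE per
# subdirectory, plus a root DLE that excludes those subdirectories.
# The dumptype names match the example stanza; adjust to taste.

def split_disklist(host, path, subdirs, sub_dumptype="comp-user-tar",
                   root_dumptype="comp-work-tar", level=1):
    """Return disklist text splitting `path` on `host` into per-subdir DLEs."""
    lines = [f"# {host}:{path}"]
    # One DLE per broken-out subdirectory.
    for sub in sorted(subdirs):
        lines.append(f"{host} {path}/{sub} {sub_dumptype} {level}")
    # Root DLE covering everything else, excluding the subdirs above.
    lines.append(f"{host} {path} {{")
    lines.append(f"    {root_dumptype}")
    for sub in sorted(subdirs):
        lines.append(f'    exclude append "./{sub}"')
    lines.append(f"}} {level}")
    lines.append(f"# end {host}:{path}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(split_disklist("edge3", "/vol/SEQS", ["BANK", "D1", "F", "F5"]))
```

Run against the example host and path, this reproduces the stanza shown above; a real script would presumably also scan the directory (or `du` output) to decide which subdirs are worth splitting out.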
I try to keep most of my DLEs below 10GB, with occasional large ones up
to 70GB, on 200GB LTO-2 tapes without hardware compression.
Alex
--
Alexander Jolk * BUF Compagnie * alexj AT buf DOT com
Tel +33-1 42 68 18 28 * Fax +33-1 42 68 18 29