ADSM-L

Subject: Re: Performance problems with large number of files
From: "Thomas A. La Porte" <tlaporte AT ANIM.DREAMWORKS DOT COM>
Date: Mon, 10 May 1999 15:54:50 -0700
Chuck,

I would suggest looking at an instrumentation client detail trace
(-traceflags=instr_client_detail) for an incremental backup of
the filesystems. This will give you a good indication of where
the ADSM client is spending the bulk of its time. The following
is an example of the trace information you'll receive:

Final Detailed Instrumentation statistics

Elapsed time:    26.908 sec

Section      Total Time(sec)  Average Time(msec)  Frequency used

------------------------------------------------------------------
Client Setup        0.679          678.9              1
Process Dirs       11.204           20.9            537
Solve Tree          0.000            0.0              0
Compute             0.013            0.0            520
Transaction         1.370            0.6           2206
BeginTxn Verb       0.000            0.0              1
File I/O           10.968           17.5            627
Compression         0.000            0.0              0
Data Verb           1.459            2.8            520
Confirm Verb        0.058           58.4              1
EndTxn Verb         1.129         1129.3              1
Client Cleanup      0.027           27.0              1

------------------------------------------------------------------
During troubleshooting of a V2 problem a while back, these traces
were key in showing that our disk subsystems were our primary
bottleneck.
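
For what it's worth, an invocation along these lines should produce
that report. The exact syntax can vary by client level, and the
-tracefile option and the output path below are just my assumptions
for where to send the trace; check the client documentation for your
version:

   dsmc incremental /yourfs -traceflags=instr_client_detail \
        -tracefile=/tmp/instr_client.out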

 -- Tom

Thomas A. La Porte
DreamWorks Feature Animation
tlaporte AT anim.dreamworks DOT com

On Mon, 10 May 1999, Chuck Mattern wrote:

>I have an AIX server (ADSM client) that has some large filesystems (~80 gigs)
>and a large number of files (~3 million).  The machine has 512 megs of RAM, 1.5
>megs of swap and, until the number of files topped 1 million per filesystem, no
>performance problems.  Files average 40k in size.  ADSM is used for backup and
>for HSM.  Currently backup of one of these big filesystems can take as much as
>60 hours.  We are forced to use memoryefficientbackup lest dsmc fail with an out
>of memory error.  The machine never runs out of memory so it must be the dsmc
>client.  We have tried setting the priority to the max with nice and have tried
>setting ulimit for memory and data to unlimited.  Still must use
>memoryefficientbackup and still need a hideous amount of time to back up.
>
>Has anyone dealt with a situation like this in the past (hopefully with
>success)?
>
>Thanks in advance,
>Chuck
>