Subject: Re: Recovery log
From: PAC Brion Arnaud <Arnaud.Brion AT PANALPINA DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Tue, 18 Feb 2003 18:06:39 +0100
Hi Mark,

You are probably suffering from the so-called "log pinning" symptom. To make
it short: the log works on a circular basis, and space in it can only be freed
once all transactions written into it have been committed to the TSM DB. If
that does not happen, you reach a point where the log bites its own tail:
the oldest records cannot be freed, so no new ones can be written, and the
server dies. This is generally caused by very long transactions, for
example huge or slow archives (probably your case).
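If you want to confirm that this is what is happening, a quick look at the log
utilization already tells a lot. On a 5.x server you could run (the server name
in the prompt is only an example):

   tsm: SERVER1> query log format=detailed

and watch the "Pct Util" and "Max Pct Util" fields: if the maximum climbs
towards 100% while the archives are running, those sessions are pinning the log.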
The antidote: extend your log, or set the THROUGHPUTDATATHRESHOLD and
THROUGHPUTTIMETHRESHOLD options in dsmserv.opt so that sessions running at
too low a throughput for too long are cancelled automatically. Another
option would be switching to "normal" logmode, if you are currently
using rollforward, but this has its own bad sides ...
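Just as an illustration (the values and paths below are made up, check the
Administrator's Reference for your level), the two options go into dsmserv.opt
on their own lines, with comment lines starting with an asterisk:

   * cancel sessions that stay below 50 KB/sec ...
   THROUGHPUTDATATHRESHOLD 50
   * ... once they have been active for more than 60 minutes
   THROUGHPUTTIMETHRESHOLD 60

Extending the log is two admin commands, again with an invented path and size:

   define logvolume /tsm/log/log02.dsm formatsize=1024 wait=yes
   extend log 1024

And if you really decide to drop rollforward, "set logmode normal" does it,
at the cost of losing point-in-time recovery of the DB.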
Anyway, do a search on the topic "pinned log" in this list, and you'll find
lots of responses to your question!
Hope it helped ...

Arnaud

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
| Arnaud Brion, Panalpina Management Ltd., IT Group     |
| Viaduktstrasse 42, P.O. Box, 4002 Basel - Switzerland |
| Phone: +41 61 226 19 78 / Fax: +41 61 226 17 01       | 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=



-----Original Message-----
From: Mark Hayden [mailto:MHayden AT EPA.STATE.IL DOT US] 
Sent: Tuesday, 18 February, 2003 16:06
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Recovery log


Hi all, we have had some issues with our recovery log filling up during
archives. Our DB is growing at an alarming speed, currently at about 82% of a
41 GB DB. Our log is only a gig in size. Is this too small? This seems to
happen during remote archives across the WAN. Could this play a role?

Thanks, Mark Hayden
Information Systems Analyst
E-Mail:  MHayden AT epa.state.il DOT us


 
