Subject: Re: Interesting LTO fault symptom
From: "Lambelet,Rene,VEVEY,GL-CSC" <Rene.Lambelet AT NESTLE DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Mon, 16 Jun 2003 15:13:03 +0200
Hi,

we have exactly the same problem: about 2-3 tapes per day with write errors at
roughly 2/3 of capacity, and only on Imation tapes, not on IBM ones...

We spend our time doing MOVE DATA!
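
For anyone hitting the same thing, the sequence looks roughly like this from
the TSM administrative command line (a sketch; VOL001 is a placeholder volume
name): mark the failing tape read-only so nothing new lands on it, then drain
it with MOVE DATA:

    /* VOL001 is a placeholder volume name */
    update volume VOL001 access=readonly
    move data VOL001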

                René LAMBELET
                NESTEC  SA
                GLOBE - Global Business Excellence
                Central Support Center
                Information Technology
                Av. Nestlé 55  CH-1800 Vevey (Switzerland) 
                tel +41 (0)21 924 35 43   fax +41 (0)21 703 30 17   local K4-404
                mailto:rene.lambelet AT nestle DOT com


-----Original Message-----
From: Tomáš Hrouda Ing. [mailto:throuda AT HTD DOT CZ]
Sent: Monday, 16 June 2003 14:14
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Interesting LTO fault symptom


Hi all,

during the last few weeks I made some interesting findings on one of our
production LTO 3583-L18 libraries. It has 2 drives, and both were replaced
after a series of media faults (one of them twice) after about 1 year of
operation. We have about 20 "historically touched" tapes with an average of
3-4 write faults each. The media faults keep recurring, and my finding is
that all of them occurred at 70-75% of the estimated capacity (set by
long-term use to 105 GB; we use client compression) while the tape was being
filled. It looks as if all the tapes were corrupted at nearly the same place;
of course there is some spread, because the fill level is only an estimate.
Faults on these tapes keep recurring at the same percentages of maximum
capacity.
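
To put numbers on it: 70-75% of the 105 GB estimated capacity is roughly
73-79 GB written, so the faults cluster around the same physical region of
every cartridge (with some spread, since client compression makes the actual
position vary with how well each tape's data compressed). If you want to see
where your own tapes fail, the VOLUMES table carries the fill level alongside
the error counts (column names may differ by server version):

    /* list fill level and error count for volumes with write errors */
    select volume_name, est_capacity_mb, pct_utilized, write_errors from volumes where write_errors > 0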

I can understand one tape having a media fault repeatedly at the same place,
but about 20 tapes? Could it mean that all the tapes were corrupted at the
same place by one bad drive, or could the cause lie in the microcode?
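
One way to test the one-bad-drive theory, assuming the activity log still
covers the period: search it for each suspect volume and note which drive
reported the I/O error (VOL001 is a placeholder; -90 means the last 90 days):

    /* VOL001 is a placeholder volume name */
    query actlog begindate=-90 search=VOL001

If the same drive shows up for all ~20 tapes, that drive is the prime
suspect; if both drives appear, the microcode (or the media batch) looks more
likely.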

We are in contact with our IBM support to resolve it, but I am interested
whether any of you have observed a similar phenomenon.

Tomas
