Subject: Re: [Networker] SCSI problems -- How many drives to a bus?
From: George Sinclair <George.Sinclair AT NOAA DOT GOV>
To: NETWORKER AT LISTMAIL.TEMPLE DOT EDU
Date: Mon, 12 Jan 2004 12:01:51 -0500
Yes, StorageTek replaced all the terminators, claiming that numerous
customers had experienced problems with the older ones.

George Lavrov wrote:
>
> Have you tried different SCSI terminators? According to my local contact at
> STK, a while ago STK released a number of terminators which had problems...
> I have solved SCSI problems (LTO drive transfer rates) twice (on Windows)
> with STK L80 and L40 libraries in the last 6 months simply by replacing the
> STK terminators with regular LVD/SE ones...
>
> Cheers,
> Gueorgui (George) Lavrov, MCSE
> glavrov AT mail DOT com
>
> -----Original Message-----
> From: George Sinclair [mailto:George.Sinclair AT NOAA DOT GOV]
> Sent: Friday, January 09, 2004 9:36 AM
> To: NETWORKER AT LISTMAIL.TEMPLE DOT EDU
> Subject: SCSI problems -- How many drives to a bus?
>
> Hi,
>
> We have a StorageTek L80 tape library with 4 LTO drives. We've been seeing
> a lot of SCSI problems on the host, which is a storage node running RedHat
> Linux. I end up rebooting this host about once a week because the
> /etc/LGTOuscsi/inquire utility fails to see the picker device. This is
> really annoying. We finally moved the storage node to another, more
> powerful Linux box with more buses, etc. Same problem there! The first
> clue is the "read open error, Device or resource busy" message that
> appears next to the affected device in the devices section of the nwadmin
> window. Often, a backup will be running when the host loses communication
> with the picker.
>
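> (Side note for the archives: before rebooting, it's worth checking whether
> the kernel itself has dropped the picker, since a 2.4 kernel can detach and
> re-attach a single device without a reboot. A rough sketch; the host 1,
> channel 0, id 6, lun 0 numbers below are placeholders, so substitute
> whatever cat /proc/scsi/scsi reports for the picker:
>
>     # what the kernel currently sees on each bus
>     cat /proc/scsi/scsi
>
>     # what NetWorker's inquire sees
>     /etc/LGTOuscsi/inquire
>
>     # as root, drop and re-attach one device: host channel id lun
>     echo "scsi remove-single-device 1 0 6 0" > /proc/scsi/scsi
>     echo "scsi add-single-device 1 0 6 0" > /proc/scsi/scsi
>
> If add-single-device doesn't bring it back, the bus itself is wedged, which
> points at hardware, termination, or cabling.)
>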
> We have the robot on its own separate bus, and all 4 drives share a bus.
> Max sessions per device is set to 5. We're running 6.1.1 under Solaris on
> the primary server. I should also note that we have an ATL SDLT tape
> library running on there, too. Its picker and two drives all share the
> same bus, but that bus is all its own and does not share anything with the
> L80. So, we have three Adaptec cards (dual-channel): one for the ATL, one
> for the L80 picker, and one for the L80 LTO drives.
>
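> (One way to double-check that mapping from the Linux side: the 2.4 aic7xxx
> driver exposes one proc file per Adaptec channel, which should also show
> the negotiated sync rate and width for each target. A sketch, assuming the
> stock aic7xxx driver:
>
>     # one entry per Adaptec channel the kernel has registered
>     ls /proc/scsi/aic7xxx/
>
>     # per-target negotiation details for the first channel
>     cat /proc/scsi/aic7xxx/0
>
> A channel that has fallen back from LVD to single-ended speeds would point
> straight at termination.)
>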
> I'm wondering if we have too many LTO drives on the bus. Could this cause
> these SCSI problems? Maybe it would be better to have no more than two
> drives per bus (see the quick math below)? Someone suggested that we get
> the picker on its own bus, which we recently did, but that didn't fix it.
> I'm beginning to think that there's something wrong with the StorageTek
> library and maybe it's time to have StorageTek come look at it. Maybe we
> should get a temp license for another storage node and move the ATL over
> there so we only have one library on this host? I guess it would be easier
> to troubleshoot, but it seems silly to have to do that. There's no reason
> we should not be able to run two libraries, and the thing is that the ATL
> library never gives us any problems. I never see these "read open
> error ..." messages on there.
> Hmm ....
>
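> (For what it's worth, the raw bandwidth math, assuming LTO-1 drives at
> roughly 15 MB/s native and up to ~30 MB/s on 2:1 compressible data:
>
>     4 drives x 15 MB/s = 60 MB/s   native
>     4 drives x 30 MB/s = 120 MB/s  compressing
>
> Ultra160 LVD is 160 MB/s on paper and Ultra2 is 80 MB/s, so four streaming
> drives fit on an Ultra160 bus with little headroom and can saturate an
> Ultra2 bus outright. That could explain slow backups, but not a picker on
> its own bus going deaf.)
>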
> Any thoughts?
>
> Thanks.
>
> George
>

--
Note: To sign off this list, send a "signoff networker" command via email
to listserv AT listmail.temple DOT edu or visit the list's Web site at
http://listmail.temple.edu/archives/networker.html where you can
also view and post messages to the list.
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=