ADSM-L

Subject: Re: Disk volumes
From: Scott Walters <scott_walters AT MACKTRUCKS DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Thu, 26 Sep 2002 14:48:39 -0400
John,

       Overall, the biggest performance gain we got was changing the DB to
use a raw slice instead of a disk file (it even sped up the responsiveness
of the web interface).  We then told TSM to mirror the DB with another
raw slice.  We've done the same for the recovery log, with the log mode
set to rollforward.
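
       As a rough illustration of that layout (the device paths and sizes
here are placeholders, not an actual configuration), the TSM 5.x admin
commands look something like:

           /* primary DB volume on a raw slice, mirrored by TSM itself */
           define dbvolume /dev/rdsk/c2t0d0s0
           define dbcopy /dev/rdsk/c2t0d0s0 /dev/rdsk/c2t1d0s0
           extend db 4096
           /* recovery log, also mirrored, run in roll-forward mode */
           define logvolume /dev/rdsk/c2t2d0s0
           define logcopy /dev/rdsk/c2t2d0s0 /dev/rdsk/c2t3d0s0
           extend log 2048
           set logmode rollforward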

       From what I read in the archives of this list and in the docs, *make
sure* that MIRRORWRITE DB and MIRRORWRITE LOG are set to SEQUENTIAL in the
server options.
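
       For reference, that is just two lines in dsmserv.opt (shown here
purely as an illustration):

           * dsmserv.opt: write the mirrored copies one after the other
           MIRRORWRITE DB  SEQUENTIAL
           MIRRORWRITE LOG SEQUENTIAL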

scott



Johnn D. Tan wrote:

Thanks Scott. Hmm, never saw anyone else mention this, though I've
been on the list for about a year. (Though, given the volume of this
list, I definitely could've missed some mails.)

Well, we are planning to move to new hardware and to TSM 5.1.1.x, so
I guess that's as good a time as any to try using raw volumes for
DB/log/diskpool. We definitely need any performance gains we can get
as our daily automated procedures run very late into the day.

Thanks again!

johnn



John,
       We just went through setting up our disk staging pool with TSM 5.1
on Solaris 8.  We had 6x36G to work with.

       The initial config used large files on the filesystem.  This is
incredibly slow.
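
       (For comparison, a file-based volume like the ones we started with
looks roughly like this; the path, pool name, and size are just examples:)

           # format an 8 GB disk-pool volume as a file in the filesystem
           dsmfmt -m -data /tsmstg/vol01.dsm 8192

       and then, from the admin client:

           define volume diskpool /tsmstg/vol01.dsm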

       We then configured DiskSuite to mirror the disks and configured TSM
to use the raw metadevices, e.g. /dev/md/rdsk/d0, so TSM saw 3x36.
Performance was significantly better.  From what we saw, it seemed the
mirroring affected performance more than the lack of spindles did.  As
others have said, TSM seems to be pretty good at spreading the data
around so it minimizes contention on the spool volumes.  I never tested
more, smaller mirrors.
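
       (A minimal sketch of one such mirror, with made-up disk and pool
names: build the metadevice with DiskSuite, then hand the raw metadevice
to TSM.)

           # one-way concat/stripe on each of two disks
           metainit d1 1 1 c1t0d0s0
           metainit d2 1 1 c1t3d0s0
           # create mirror d0 from d1, then attach d2 (resync starts)
           metainit d0 -m d1
           metattach d0 d2

       and in TSM:

           define volume diskpool /dev/md/rdsk/d0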

       We then gave TSM the raw slices directly (e.g. /dev/rdsk/c1t0d0s0),
so it saw 6x36.  This was the fastest by far.  We were able to max out the
network bandwidth (100Mb) for over an hour (38G in one hour).
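
       (Illustration only; "diskpool" is a placeholder pool name.  Giving
TSM the raw slices is just one define volume per slice, and raw partitions
need no dsmfmt:)

           define volume diskpool /dev/rdsk/c1t0d0s0
           define volume diskpool /dev/rdsk/c1t1d0s0
           /* ...one define volume for each of the six slices */

       38G in an hour works out to a bit over 10 MB/s, which is close to
the practical ceiling of a 100Mb link.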

       We lose the fault tolerance of mirrored disks, but we figure that
since it is only a staging area, who cares?  If a disk goes bad, we will
lose the data backed up to it, but we can always back it up again.  We felt
the performance gains were well worth the redundancy hit, though we have
not yet tested pulling a disk from the staging pool to see what happens.

       I don't know your environment, but I would go with a single slice on
each disk and tell TSM to use the raw devices.  If you really feel you need
the redundancy, I would just create six mirrors and use the raw mirrored
devices.

       From all of the benchmarking I've done, it seems that once you get
your setup decently tuned (don't tell TSM to use files for the DB, log, or
disk pools), the bottlenecks are either network capacity to your TSM server
or disk/CPU performance on the client (compression on).  But I've only
tested in one environment, ymmv.


       Hope this helps.

scott


Johnn D. Tan wrote:

I have 12 36-GB drives available for spool.

Based on recommendations made to this list earlier this year, I went
with 12 mirrored disk spools of 16 GB each (keep in mind disk
overhead).

As I understood it, the issue was that you want many spool volumes so
that, as Allen mentioned, you can have many threads for backups and even
migrations (assuming you have a good number of tape drives).

However, you don't want too many spool volumes per disk; otherwise there
is contention for head movement on the drive, which results in poorer
performance.

johnn


=> On Thu, 26 Sep 2002 08:54:01 -0400, Mahesh Tailor
<MTailor AT CARILION DOT COM> said:

 Hopefully this is a simple question: I have fourteen 36GB drives that are
 available for the diskpool, and I was wondering whether it is better to
 have seven 5GB files or three 10GB files or one 35GB file or something
 else?  The drives are mounted in two IBM-2014 Ultra-Wide SCSI disk drawers
 with separate Ultra-Wide controllers.  The other 14 drives are used for
 the DB, LOG, and spare.



You have a total of 28 spindles, 14 each on two busses, right?

I'd suggest making a RAID-5 out of the fourteen free spindles, and then
making the individual volumes "a reasonable size".  What's a reasonable
size?  Uh... ;)

I just did this with a drawer of 36G SSA, and I chose 10G volumes because I
have about a dozen (and growing) disk pools amongst which I need to divide
things up.

Even if you only have one or two disk pools, it's useful to have more than a
few volumes per pool, because at any instant only one thing can write to a
volume.  So, for example, if you have 12 clients backing up and one 70G disk
volume, there is contention for the thread controlling that one volume.

So calculate the size so that you'll have as many volumes as you feel like
keeping track of, but not many more than that.
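
(As a rough worked example with assumed numbers, not from the setups above:
fourteen 36G spindles in a RAID-5 leave about 13 x 36G = ~468G usable, so
10G volumes would give you on the order of 45 of them to divide among your
disk pools: plenty of targets for concurrent writes, but still a manageable
number to keep track of.)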


- Allen S. Rout






--
Scott Walters
Packet Pusher - "The world speaks IP"

Mack Trucks, WHQ        http://www.MackTrucks.com
2100 Mack Boulevard     Ph: 610.709.3728
Allentown, PA 18103     Fx: 610.709.2809





