Subject: Re: backup performance with db on the Shark ESS
From: Joshua Bassi <jbassi AT IHWY DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Wed, 18 Sep 2002 12:50:10 -0700
Eliza,

How many ranks (Logical Subsystems) are your DB and log spread across?
For the highest performance, try spreading the load across as many
spindles as possible.
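
As a rough sketch (the hdisk numbers, volume names, and admin password
below are only examples, not anything from your setup), spreading on AIX
usually comes down to building the volume group from LUNs on different
ranks, creating the logical volumes with maximum inter-disk spreading,
and defining them to TSM as separate db volumes:

  # volume group built from LUNs that sit on different ESS ranks
  mkvg -y tsmdbvg hdisk4 hdisk5 hdisk6 hdisk7
  # -e x = maximum inter-physical-volume allocation, so the logical
  # partitions are spread across all of the LUNs in the volume group
  mklv -y tsmdb01 -e x tsmdbvg 64
  mklv -y tsmdb02 -e x tsmdbvg 64
  # add the raw logical volumes as TSM database volumes, then grow the
  # db into the new space with "extend db <megabytes>"
  dsmadmc -id=admin -password=xxxxx "define dbvolume /dev/rtsmdb01"
  dsmadmc -id=admin -password=xxxxx "define dbvolume /dev/rtsmdb02"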

--
Joshua S. Bassi
IBM Certified - AIX 4/5L, SAN, Shark
Tivoli Certified Consultant - ADSM/TSM
eServer Systems Expert -pSeries HACMP

AIX, HACMP, Storage, TSM Consultant
Cell (831) 595-3962
jbassi AT ihwy DOT com


-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of
Eliza Lau
Sent: Wednesday, September 18, 2002 9:28 AM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: backup performance with db on the Shark ESS

I posted a message a month ago about performance degradation after moving
the db and the log to the Shark ESS.  The problem has been resolved with
help from Paul Seay.  Here is a recap:

Moved the 32G db and 10G log from attached non-RAID SCSI disks to the IBM
Shark ESS.
Kept 2 copies of the db and log.  I wasn't ready to drop the security
blanket.
The db backup increased from 40 minutes to 90 minutes.
Reformatted the AIX volume group the db was on from 2 LUNs to 8 LUNs.
The db backup still ran 90 minutes.  The additional LUNs didn't help at all.
Dropped the second copy of the db and log.
The db backup now runs in 24 minutes.
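
In case it helps anyone doing the same thing, dropping the copy is just a
matter of deleting the copy volume from the admin command line and timing
a full db backup again.  Something along these lines (the volume name,
device class, and password are placeholders, not our real ones):

  dsmadmc -id=admin -password=xxxxx "query dbvolume format=detailed"
  # delete the mirror (copy) volume, leaving the primary in place
  dsmadmc -id=admin -password=xxxxx "delete dbvolume /dev/rtsmdbcp01"
  # then time a full db backup again
  dsmadmc -id=admin -password=xxxxx "backup db devclass=TAPECLASS type=full"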

Paul told me that there is a fix in 4.2.2.8 for the performance of backing
up 2 copies of the db.  I will try setting up a second copy of the db after
I upgrade to 5.1 in a few months and see if it makes a difference.
Meanwhile, I am keeping only 1 copy of the db and 2 copies of the log
and depending solely on the RAID-5 of the ESS for the integrity of my db.
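
Putting the second copy back after the upgrade should just be a matter of
defining a copy volume against each primary again, for example (volume
names below are again only placeholders):

  # mirror a primary db volume onto a copy volume, ideally on a different rank
  dsmadmc -id=admin -password=xxxxx "define dbcopy /dev/rtsmdb01 /dev/rtsmdbcp01"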


server: TSM 4.2.1.15 on AIX 4.3.3

Thanks to all the people who contributed,

Eliza Lau
Virginia Tech Computing Center
1700 Pratt Drive
Blacksburg, VA 24060
lau AT vt DOT edu
