Subject: Re: [ADSM-L] Disk layout for AIX
From: "Rhodes, Richard L." <rrhodes AT FIRSTENERGYCORP DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Wed, 15 Jul 2015 15:20:29 +0000
What seems like not that long ago we ran our databases on JBOD disks (disks of
250MB to 1GB).  We set up Oracle per the classic design - logs on one disk, DB
files on another, redo logs on another, archive logs on others (AIX mirrored
pairs).  Then the indexes would pound their disk while the other disks sat
there not being hit hard, so we added another disk for the indexes.  We kept at
this until we finally realized all we were really doing was setting ceilings on
the operations.  We would have a couple dozen disks with some getting hit hard
while others did little.  That's when we first started striping across the
spindles.  The only place we segregate by I/O type is random vs. sequential.

When we moved onto EMC Symmetrix storage we continued this - striping across
all LUNs, which were spread across all the backend disks in the Symm.  This
created a plaid!  EMC told us NOT to do this - it was a mistake, even though
they spouted that hot spindles were the number 1 performance problem.  Then
they would produce a heat map and see that our backend spindles were all evenly
used - no hot spots.  Several years later they were standing up at EMC World
spouting wide striping.  Now it seems most disk systems wide stripe by default.

Our view is an I/O is an I/O, unless it's of a different nature (random vs.
sequential).  So our default practice is lots of LUNs in a few VGs, with LVs
set to an inter-disk allocation policy of MAXIMUM to spread PPs across all the
LUNs in the VG.  This is especially true as multiple hosts share the same disk
systems and VMware has multiple clients sharing disk systems (the "I/O
blender", they call it).
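
As a concrete sketch of that practice (the VG, LV, and hdisk names and sizes
below are made up for illustration, not our actual layout), the spread is done
in AIX LVM with the inter-disk allocation policy:

  # One VG holding many LUNs (hypothetical hdisk names)
  mkvg -y oradatavg hdisk4 hdisk5 hdisk6 hdisk7 hdisk8 hdisk9 hdisk10 hdisk11

  # LV with the inter-physical-volume allocation policy set to maximum (-e x),
  # so its PPs are spread round-robin across every LUN in the VG
  mklv -y oradatalv -t jfs2 -e x oradatavg 1600

  # JFS2 filesystem on the LV
  crfs -v jfs2 -d oradatalv -m /oradata -A yes
  mount /oradata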

The resources in AIX are per LV/filesystem, FCS adapter, and HDISK/LUN.  There
are no I/O resources per VG that we are aware of.  We're getting ready to move
our big SAP system from AIX/VxVM to AIX/LVM.  We will probably have multiple
VGs, but they will be split by I/O type.  We will use striped logical volumes
for the Oracle redo log filesystems, and separate VGs for the RMAN area
(sequential I/O) and some other stuff.  But in general, if it's random I/O at
the disk system, then we use very few VGs with LVs spread across the LUNs.
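
For the redo log filesystems, the striped LV part might look something like
this (again just a sketch - the stripe size, LUN names, and LP count are
placeholders, not a recommendation):

  # Small VG dedicated to redo logs (sequential I/O), two hypothetical LUNs
  mkvg -y redovg hdisk20 hdisk21

  # Striped LV: -S sets the strip size; writes alternate across the listed LUNs
  mklv -y redolv -t jfs2 -S 256K redovg 64 hdisk20 hdisk21

  crfs -v jfs2 -d redolv -m /oraredo -A yes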




Rick




-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of 
David Bronder
Sent: Wednesday, July 15, 2015 10:24 AM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Re: Re: Disk layout for AIX

Same number of LUNs in fewer VGs, or fewer LUNs, too?  The queuing concern
makes it sound like fewer LUNs.  If so, as Rick says, that's also where your
I/O concurrency would suffer.
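
For reference, the per-LUN and per-adapter queue resources in question can be
inspected and raised roughly like this; the device names and values are only
examples, not tuning advice:

  # Current per-LUN queue depth and per-adapter command elements
  lsattr -El hdisk4 -a queue_depth
  lsattr -El fcs0 -a num_cmd_elems

  # Raise them; -P defers the change until the device is next reconfigured
  chdev -l hdisk4 -a queue_depth=32 -P
  chdev -l fcs0 -a num_cmd_elems=2048 -P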

If it's the same number of LUNs, using one VG per LUN makes it easier to
isolate each LV to its own PV rather than potentially mixing them all together
and possibly having multiple LVs share the same PV, which could result in
database I/O to different DB2 volumes hitting the same PV and causing extra
contention.  (To be fair, in modern SAN arrays the consumer host has no idea
where any of the blocks really live in the back end, so that kind of
contention can, and probably often does, happen anyway.)
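
In layout terms (hypothetical names again), the one-VG-per-LUN approach is
simply the following, repeated for each DB LUN:

  # One VG and one LV per LUN, so each database filesystem maps to exactly one PV
  mkvg -y tsmdb01vg hdisk30
  mklv -y tsmdb01lv -t jfs2 tsmdb01vg 256
  crfs -v jfs2 -d tsmdb01lv -m /tsm/db01 -A yes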

My gut reaction with DB2 on TSM 6 and up is that nothing with the database
layout is as easily fixed as it was with the embedded database in TSM 5 and
earlier...  :-/  There are always trade-offs...

=Dave


On 07/15/2015 09:15 AM, Rhodes, Richard L. wrote:
> The concurrency I'm aware of is at the hdisk/lun level.  There's a
> num_cmd_elems on the fcs adapter, which we set to 2k, and then queue_depth
> on the hdisk.  That's why spreading I/O across as many hdisks/luns as
> possible is advantageous.
>
> Rick
>
>
>
> -----Original Message-----
> From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf 
> Of Huebner, Andy
> Sent: Wednesday, July 15, 2015 9:33 AM
> To: ADSM-L AT VM.MARIST DOT EDU
> Subject: Re: Disk layout for AIX
>
> He claims there is a queuing issue with too many LUNs at the HBA.  I guess I
> missed that in the last 12 years of being a storage/TSM admin.
> I told him the theory of using many was to allow more concurrency.  Before we
> build I just want to make sure this is not a mistake that cannot be easily fixed.
>
> Thanks,
>
> Andy Huebner
>
> -----Original Message-----
> From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf 
> Of David Ehresman
> Sent: Wednesday, July 15, 2015 8:12 AM
> To: ADSM-L AT VM.MARIST DOT EDU
> Subject: Re: [ADSM-L] Disk layout for AIX
>
> VGs are cheap.  Why does your AIX admin want to reduce the number of VGs?
>
> David Ehresman
>
> -----Original Message-----
> From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf 
> Of Huebner, Andy
> Sent: Wednesday, July 15, 2015 9:03 AM
> To: ADSM-L AT VM.MARIST DOT EDU
> Subject: [ADSM-L] Disk layout for AIX
>
> I have an AIX admin who wants to build my new TSM server using two VGs for
> the database (6 file systems) and one VG for the various logs.
> We currently have 6 VGs for the DB and 3 VGs for the logs.  Each VG contains
> 1 file system.
>
> The DB is about 375GB and the new hardware is a P8.
>
> No TSM de-dup.
>
> Should I be concerned about the DB setup?
>
>
> Andy Huebner
>

--
Hello World.                                David Bronder - Systems Architect
Segmentation Fault                                      ITS-EI, Univ. of Iowa
Core dumped, disk trashed, quota filled, soda warm.   david-bronder AT uiowa 
DOT edu

