ADSM-L

Re: [ADSM-L] Lousy performance on new 6.2.1.1 server with SAN/FILEDEVCLASS storage

2010-10-21 12:37:28
From: Zoltan Forray/AC/VCU <zforray AT VCU DOT EDU>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Thu, 21 Oct 2010 12:36:18 -0400
Correct.  This machine has 8 internal 600GB 15K drives.  The OS and DB are 
on one pair of mirrored drives.  The log and archlog share the rest of 
the internal drives in a RAID-10 (I think) array, leaving extra space 
for DB expansion (one server I plan to migrate to these new 6.2 servers 
has a DB size of 190GB, which comes to a minimum of ~600GB when converted 
from 5.5).  The file devclass is SAN storage in a CLARiiON box.  There 
are 3 QLogic HBA cards: 1 for disk and the other 2 for tape, but only 1 
is in use due to lack of switch ports.

We have tried to max this box out, performance-wise: 48GB RAM, dual X5560 
Xeon 2.8GHz processors, RedHat 5.5.
Zoltan Forray
TSM Software & Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zforray AT vcu DOT edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will 
never use email to request that you reply with your password, social 
security number or confidential personal information. For more details 
visit http://infosecurity.vcu.edu/phishing.html



From:
"Strand, Neil B." <NBStrand AT LEGGMASON DOT COM>
To:
ADSM-L AT VM.MARIST DOT EDU
Date:
10/21/2010 11:02 AM
Subject:
Re: [ADSM-L] Lousy performance on new 6.2.1.1 server with SAN/FILEDEVCLASS 
storage
Sent by:
"ADSM: Dist Stor Manager" <ADSM-L AT VM.MARIST DOT EDU>



Zoltan,
    Are your database/logs on separate disks and separate HBAs from your 
filedevclass disks, and are the disk HBAs separate from tape HBAs?


Neil Strand
Storage Engineer - Legg Mason
Baltimore, MD.
(410) 580-7491
Whatever you can do or believe you can, begin it.
Boldness has genius, power and magic.


-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of 
Zoltan Forray/AC/VCU
Sent: Thursday, October 21, 2010 8:41 AM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Re: [ADSM-L] Lousy performance on new 6.2.1.1 server with 
SAN/FILEDEVCLASS storage

Speaking of this book, I found the paragraph titled:  Mitigating
performance degradation when backing up or archiving to FILE volumes

Yes, I did follow their recommendations, plus other recommendations for
transferring data to high-performance (TS1130) tape drives.  Didn't see
much, if any, difference.

I am definitely going to see if regular pre-formatted volumes on SAN
filesystems are any better/worse.
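For what it's worth, the serial preformatting discussed later in this
thread is easy to script.  This is only a sketch (pool name, directory,
volume size, and count are made up) built on the standard DEFINE VOLUME
command; WAIT=YES is what keeps the formats strictly one at a time:

```shell
#!/bin/bash
# Emit DEFINE VOLUME commands for fixed-size, preformatted FILE volumes.
# Pool name, directory, volume size, and count are illustrative only.
gen_define_cmds() {
    local stgpool=$1 voldir=$2 size_mb=$3 count=$4
    local i
    for ((i = 1; i <= count; i++)); do
        # wait=yes keeps each format strictly serial, so a volume's blocks
        # are laid down contiguously before the next define starts
        printf 'define volume %s %s/vol%03d.dsm formatsize=%s wait=yes\n' \
            "$stgpool" "$voldir" "$i" "$size_mb"
    done
}

# Pipe the output into an admin session to actually run the defines, e.g.:
#   gen_define_cmds FILEPOOL /tsmfile 51200 100 | dsmadmc -id=admin -password=xxx
gen_define_cmds FILEPOOL /tsmfile 51200 3
```

Generating the commands first and piping them in keeps the loop
server-agnostic and makes it trivial to review the defines before
committing to a long serial format run.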

FWIW, I have been trying to empty the existing filedevclass stgpool.
Migrating 4TB has been running for over 24 hours - still 33% left to
migrate, with no user activity (this is still considered somewhat of a
test server).  Using 2 TS1130 drives at the same time.  The backups in
this stgpool are for 4 nodes.  Not doing collocation.
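One knob worth checking when a migration like this crawls is the number of
migration processes on the pool.  An admin-console sketch only - the pool
name and credentials are placeholders; with data from 4 nodes in the pool
and 2 drives available, MIGPROCESS=2 allows two node streams at once:

```shell
# Check the pool, raise migration parallelism, kick off migration, watch it.
# Pool name and admin credentials are placeholders.
dsmadmc -id=admin -password=xxx "query stgpool FILEPOOL f=d"
dsmadmc -id=admin -password=xxx "update stgpool FILEPOOL migprocess=2"
dsmadmc -id=admin -password=xxx "migrate stgpool FILEPOOL lowmig=0 wait=no"
dsmadmc -id=admin -password=xxx "query process"
```

Note that for a sequential pool, migration parallelism is split by node,
so MIGPROCESS higher than the node count (or drive count) buys nothing.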


From:
Paul Zarnowski <psz1 AT CORNELL DOT EDU>
To:
ADSM-L AT VM.MARIST DOT EDU
Date:
10/20/2010 04:04 PM
Subject:
Re: [ADSM-L] Lousy performance on new 6.2.1.1 server with SAN/FILEDEVCLASS
storage
Sent by:
"ADSM: Dist Stor Manager" <ADSM-L AT VM.MARIST DOT EDU>



Hmm...

I thought perhaps the Performance Tuning Guide would help clarify, which
is where I thought I read this.  But it seems somewhat ambiguous.  Here
are some snippets (for AIX):

>When AIX detects sequential file reading is occurring, it can read ahead
>even though the application has not yet requested the data.
>* Read ahead improves sequential read performance on JFS and JFS2 file
>systems.
>* The recommended setting of maxpgahead is 256 for both JFS and JFS2:
>ioo -p -o maxpgahead=256 -o j2_maxPageReadAhead=256

then later on the same page:

>Tivoli Storage Manager server - Improves storage pool migration
>throughput on JFS volumes only (does not apply to JFS2 or raw logical
>volumes).

and still later:

>This does not improve read performance on raw logical volumes or JFS2
>volumes on the Tivoli Storage Manager server. The server uses direct I/O
>on JFS2 file systems.

So which is it?  Does it read ahead on JFS2 or not?  One vote for and two
against.

Later on, there are a couple of paragraphs related to using raw LVs which
mention array-based read-ahead:

>Using raw logical volumes on UNIX systems can cut CPU consumption but
>might be slower during storage pool migrations due to lack of read-ahead.
>However, many disk subsystems have read-ahead built in, which negates
>this concern.

Clear?  Eh.  What I take away from this is: if your array supports
read-ahead, make sure you've got it enabled - at least for storage pool
LUNs.  It probably doesn't make sense for DB LUNs, as it will just waste
your precious cache.
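On the Linux side (the 6.2 server in this thread is RedHat), the host-level
read-ahead can be checked per LUN as a sanity check.  A sketch with a
made-up device name:

```shell
# Host-side read-ahead on a storage-pool LUN (device name is illustrative;
# the value is in 512-byte sectors, so 4096 sectors = 2 MB).
blockdev --getra /dev/sdc
blockdev --setra 4096 /dev/sdc
# Leave the DB LUNs at the default - random DB reads gain nothing from
# speculative read-ahead, and it just churns the cache.
```

This only covers the OS layer; the array's own read-ahead is configured in
the array management tools and is the one that pre-stages blocks in array
cache.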

..Paul

.. thinking I might need to spend a few more nights at Holiday Inn Express
..


At 03:43 PM 10/20/2010, Remco Post wrote:
>Hmmm, that's interesting, jfs2 read-ahead. I know it exists, but recent
>TSM servers by default use direct I/O on jfs2, bypassing the buffer
>cache, and I assume the read-ahead as well... Or am I wrong?
>
>I noticed that on an XIV, dd can read a TSM diskpool volume at say 100
>MB/s, and yes, two dd processes reading two diskpool volumes get about
>185 MB/s - not exactly twice as much, but much more than one process.
>The same is true for TSM migrating to tape. So, even though you'd think
>that two processes would appear more random than one, the XIV is still
>able to handle them quite efficiently. Yes, this is two processes
>working on a single filesystem from a single host. Now, of course, dd
>doesn't use direct I/O, and TSM does, but still, there is a noticeable
>benefit to running two migrations in parallel, even if both are on the
>same LUN, filesystem, etc. (Yes, on jfs2.)
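The single-vs-parallel dd comparison Remco describes can be sketched like
this.  Scratch files stand in for diskpool volumes, so for a meaningful
number point VOL1/VOL2 at real FILE volumes (and consider GNU dd's
iflag=direct to approximate the server's direct I/O):

```shell
#!/bin/bash
# Single-reader vs. two-parallel-readers throughput check.  Scratch files
# stand in for TSM diskpool volumes; sizes are kept small for illustration.
VOL1=$(mktemp) && VOL2=$(mktemp)
dd if=/dev/zero of="$VOL1" bs=1M count=16 2>/dev/null
dd if=/dev/zero of="$VOL2" bs=1M count=16 2>/dev/null

# one reader
time dd if="$VOL1" of=/dev/null bs=1M 2>/dev/null

# two readers in parallel; if this takes well under twice the single-reader
# time, the array is handling the mixed stream efficiently
time { dd if="$VOL1" of=/dev/null bs=1M 2>/dev/null &
       dd if="$VOL2" of=/dev/null bs=1M 2>/dev/null &
       wait; }
status=$?
rm -f "$VOL1" "$VOL2"
```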
>
>On 20 Oct 2010, at 21:28, Paul Zarnowski wrote:
>
>> yes, this can get complicated...  Yes, multiple threads accessing
>> different volumes on the same spindles can create head contention, even
>> with volumes formatted serially.  But I think you can still reap
>> benefits from laying down blocks sequentially on the filesystem.  Remco
>> points out read-ahead benefits, and he is (IMHO) referring to disk
>> array-based read-ahead.  Keep in mind that jfs[2] also has read-ahead,
>> and it will still try to do this regardless of whether the physical
>> blocks are laid down sequentially - it will just result in more head
>> movement, more latency, and less efficiency.  I do not believe that
>> jfs2 read-ahead uses array-based read-ahead.  The array-based
>> read-ahead will pre-stage blocks in array cache, whereas jfs2-based
>> read-ahead will pre-stage them in jfs mbufs.
>>
>> When the array is doing read-ahead, it will turn a single-block read
>> into a multi-block read.  Since the blocks are laid down in sequence,
>> there will be (I think) less head contention during this array-based
>> read-ahead.  Not the case for jfs2 read-ahead.
>>
>> not to get lost: preformatting volumes ahead of time, and not letting
>> them get scratched and re-created on demand, will avoid filesystem
>> fragmentation and randomization of the blocks.  It's too bad that TSM
>> can't manage pre-formatted volumes as scratch volumes that can be
>> shared between different storage pools or even different servers
>> (managed by the shared library manager, of course).
>>
>> ..Paul (with Holiday Inn disclaimer)
>>
>>
>> At 03:01 PM 10/20/2010, Richard Rhodes wrote:
>>> This can get complicated.
>>>
>>> File devices, as Paul states, are mostly accessed sequentially.
>>> But, as has also been said, the actual file volumes may be fragmented
>>> on the filesystem, resulting in effectively random access.
>>> But, also, TSM may/probably will be accessing multiple file devices
>>> concurrently.  This can also result in effectively random access.
>>> But, also, also, if you are using a disk array you need to take into
>>> consideration the LUN layout.  Most big disk arrays share spindles
>>> among multiple servers (wide striping).
>>>
>>> Unless you have a single TSM task accessing a single file device
>>> (that is not fragmented) on a dedicated disk, there will be contention
>>> for I/Os.
>>>
>>> Rick
>>>
>>> From:
>>> Paul Zarnowski <psz1 AT CORNELL DOT EDU>
>>> Sent by:
>>> "ADSM: Dist Stor Manager" <ADSM-L AT VM.MARIST DOT EDU>
>>> To:
>>> ADSM-L AT VM.MARIST DOT EDU
>>> Date:
>>> 10/20/2010 02:19 PM
>>> Subject:
>>> Re: Lousy performance on new 6.2.1.1 server with SAN/FILEDEVCLASS
>>> storage
>>>
>>> I/O to devclass file volumes will be inherently sequential, yes.  It's
>>> not an absolute, however.  There are varying degrees of
>>> "sequentialness".  Think about it this way.  When you are writing
>>> these volumes, they will definitely be purely sequential.  However,
>>> when reading them, they may or may not be purely sequential.  If you
>>> are only restoring, say, "active" backup files, then TSM would be
>>> skipping over the "inactive" files that can be interspersed between
>>> the active files.  Yes, they may have been "active" when they were
>>> written (depending on what did the writing - client or migration), but
>>> by the time you go to read the data, depending on what is doing the
>>> reading, you may not be reading them purely sequentially.  Even if you
>>> are not reading them purely sequentially, I believe you will still
>>> likely reap benefits by having the blocks laid down on disk
>>> sequentially.  Note that when I say laid down on disk sequentially,
>>> this includes the idea of striping the blocks across spindles (if you
>>> are doing striping).  Striping does not defeat the sequentiality.
>>>
>>> ..Paul
>>>
>>> At 02:11 PM 10/20/2010, Hart, Charles A wrote:
>>>> Dumb question, but isn't the whole idea of the FILE devclass that it
>>>> is sequential?  Can one be more sequential than the other?  If it's
>>>> not sequential, then it's random.
>>>>
>>>>
>>>> -----Original Message-----
>>>> From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On
>>>> Behalf Of Paul Zarnowski
>>>> Sent: Wednesday, October 20, 2010 1:07 PM
>>>> To: ADSM-L AT VM.MARIST DOT EDU
>>>> Subject: Re: [ADSM-L] Lousy performance on new 6.2.1.1 server with
>>>> SAN/FILEDEVCLASS storage
>>>>
>>>> How you connect to the disk storage (i.e., SCSI or SAN) doesn't
>>>> matter.  This goes more to the issue of how blocks within the volumes
>>>> are laid out on the spindles.  Formatting them one at a time will
>>>> cause the blocks to be laid out in a more sequential fashion, so that
>>>> when TSM references the blocks, they will be referenced in a more
>>>> sequential fashion (assuming you are doing mostly sequential I/O).
>>>>
>>>> ..Paul
>>>>
>>>>
>>>> At 02:02 PM 10/20/2010, Zoltan Forray/AC/VCU wrote:
>>>>> Thanks for the affirmation.  This is what I have been
>>>>> seeing/experiencing.  As soon as I can empty the stgpool (5TB), I
>>>>> will define fixed volumes and see how much difference that makes.
>>>>> I am aware of the issue of single-threading the defines/formats so
>>>>> as not to fragment them; however, I wonder how much that really
>>>>> matters on a SAN?
>>>>>
>>>>>
>>>>>
>>>>> From:
>>>>> Markus Engelhard <markus.engelhard AT BUNDESBANK DOT DE>
>>>>> To:
>>>>> ADSM-L AT VM.MARIST DOT EDU
>>>>> Date:
>>>>> 10/20/2010 09:20 AM
>>>>> Subject:
>>>>> [ADSM-L] Lousy performance on new 6.2.1.1 server with SAN/FILEDEVCLASS
>>>>> storage
>>>>> Sent by:
>>>>> "ADSM: Dist Stor Manager" <ADSM-L AT VM.MARIST DOT EDU>
>>>>>
>>>>>
>>>>>
>>>>> Hi Zoltan,
>>>>>
>>>>> my experience has been: use fixed-size preformatted volumes, and be
>>>>> sure to format them sequentially, even if it seems to take a hell of
>>>>> a time.  But then, it's a one-time action and highly automated, so
>>>>> just don't try to boost "performance" here.  Make sure no one else
>>>>> is dragging performance down; SAN guys sometimes tend to put all
>>>>> kinds of unassorted loads on one storage array, producing massive
>>>>> hot-spots during TSM activities.
>>>>>
>>>>> Kind regards,
>>>>>
>>>>> Markus
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Paul Zarnowski                            Ph: 607-255-4757
>>>> Manager, Storage Services                 Fx: 607-255-8521
>>>> 719 Rhodes Hall, Ithaca, NY 14853-3801    Em: psz1 AT cornell DOT edu
>>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>
>--
>Met vriendelijke groeten/Kind Regards,
>
>Remco Post
>r.post AT plcs DOT nl
>+31 6 248 21 622


