ADSM-L

Subject: Re: TSM Server 4.1.3 Performance
From: Miles Purdy <PURDYM AT FIPD.GC DOT CA>
Date: Mon, 5 Nov 2001 08:33:33 -0600
Hi Mike,

It seems very odd that small files work better than large files. First, try
reading this; I wrote it and it might help:
http://www.samag.com/documents/s=1146/sam0109j/0109j.htm , or
http://www.sysadminmag.com/articles/2001/0109/0109a/0109a.htm

Try these:
-Since you have two RAID arrays, or hdisks, I would use the LVM to stripe the
LV that your storage pool is on over the two arrays. Make sure that it is not
'spread out'; instead, create a striped LV: mklv -S 128k, for example (see the
sketch after the lslv output below).
Here is my LV:
root@unxr:/dev>lslv dbdumps_lv
LOGICAL VOLUME:     dbdumps_lv             VOLUME GROUP:   ADSMvg128
LV IDENTIFIER:      00008107f034d2ed.2     PERMISSION:     read/write
VG STATE:           active/complete        LV STATE:       opened/syncd
TYPE:               jfs                    WRITE VERIFY:   off
MAX LPs:            2170                   PP SIZE:        128 megabyte(s)
COPIES:             1                      SCHED POLICY:   striped
LPs:                2170                   PPs:            2170
STALE PPs:          0                      BB POLICY:      non-relocatable
INTER-POLICY:       maximum                RELOCATABLE:    no
INTRA-POLICY:       edge                   UPPER BOUND:    2
MOUNT POINT:        /home/dbdumps          LABEL:          /home/dbdumps
MIRROR WRITE CONSISTENCY: on                                     
EACH LP COPY ON A SEPARATE PV ?: yes                                    
STRIPE WIDTH:       2                                      

The two disks, or 'STRIPE WIDTH', are RAID 0 arrays.
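
For example, a striped LV could be created with something like this (the LV
name, volume group, number of LPs, and hdisk names are just placeholders; use
your own):

# -y = LV name, -t = type, -S = stripe size, -u = upper bound (max PVs)
# the number of LPs (100 here) must divide evenly across the two disks
mklv -y stgpool_lv -t jfs -S 128K -u 2 ADSMvg 100 hdisk6 hdisk7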

-Make sure that the queue depth on your adapter is set properly, for each
array:
root@unxr:/dev>lsattr -El hdisk6
pvid            00008107347137090000000000000000 Physical volume identifier  False
*** queue_depth 64 ***                           Queue depth                 True
write_queue_mod 3                                Write queue depth modifier  True
adapter_a       ssa0                             Adapter connection          False
adapter_b       none                             Adapter connection          False
primary_adapter adapter_a                        Primary adapter             True
reserve_lock    yes                              RESERVE device on open      True
connwhere_shad  81073AF833FB4CE                  SSA Connection Location     False
max_coalesce    0x20000                          Maximum coalesced operation True
size_in_mb      145781                           Size in Megabytes           False
location                                         Location Label              True
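
If the value needs changing, chdev can set it; something like this (hdisk6 is
just the example disk from above; the disk must not be in use, or add -P to
defer the change until the device is next configured):

# set the queue depth on the disk; -P records the change in the ODM only,
# to take effect the next time the device is configured (e.g. after reboot)
chdev -l hdisk6 -a queue_depth=64 -P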

-Try to put each array on a different loop, or better, a different adapter.
-Make sure that the VMM is tuned. Check the minfree and maxfree settings and
especially the read-ahead, as well as others. The command is:
/usr/samples/kernel/vmtune.
This is my command for my TSM server, with a very similar setup to yours:
/usr/samples/kernel/vmtune -f 256 -F 768 -u 32 -p 20 -P 70 -r 2 -R 64 -b 128 -l 65536 -B 547 -w 32000
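
Running vmtune with no arguments just prints the current settings. As I
understand the AIX 4.3 flags (double-check the documentation for your level),
the ones that matter most for large sequential reads are the read-ahead pair:

# print the current VMM settings
/usr/samples/kernel/vmtune
# -f minfree / -F maxfree       = free-list thresholds
# -r minpgahead / -R maxpgahead = sequential read-ahead, which ramps up
#                                 by powers of 2 from -r to -R
/usr/samples/kernel/vmtune -r 2 -R 64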

Here is a speed test: I copied a 2 GB file to /dev/null in one window and ran
'iostat 10' in another:
tty:      tin         tout   avg-cpu:  % user    % sys     % idle    % iowait
          1.6        515.1              47.5     26.0       16.7       9.8     

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk3           8.5      49.6      10.7        240       256
hdisk1           2.7      15.6       3.0         16       140
hdisk2           0.9       6.0       1.2          0        60
hdisk0           2.1      10.8       2.7          0       108
**** hdisk6          67.8     19407.8     153.5     194256        16 ****
hdisk5           0.0       0.0       0.0          0         0
hdisk4           0.0       0.4       0.1          0         4
**** hdisk7          80.3     19240.0     151.4     192588         4 ****
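
You can run the same kind of test with dd (the file name and block size here
are just examples):

# read a big file sequentially, discarding the data, while
# 'iostat 10' runs in another window
time dd if=/home/dbdumps/bigfile of=/dev/null bs=256k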

Do have a look at the article though; it gives lots of good ideas, I hope!
Miles


---------------------------------------------------------------------------
Miles Purdy 
System Manager
Farm Income Programs Directorate
Winnipeg, MB, CA
purdym AT fipd.gc DOT ca
ph: (204) 984-1602 fax: (204) 983-7557
---------------------------------------------------------------------------
>>> Michael.MA.Wiggan AT PDO.CO DOT OM 05-Nov-01 1:20:15 AM >>>
Hi Gurus,

I have recently installed a new TSM 4.1.3 server on a P600 and AIX 4.1.3. I
have 2 * 75 GByte RAID5 disk cache attached through SSA. The performance for
reading from disk to tape is fine for small files, but I only get 2.5 MB/sec
when files are larger than, say, 1 GByte. I have set "SELFTUNETXNsize Yes",
and it brought it up to this speed when using two drives.

I have 18 * 4 GByte files on each RAID5 and can see that the data is evenly
distributed in each file. I can see that a file of, say, 7 GBytes is spread
throughout many of the disk storage volumes. I don't want to go straight to
tape as this causes a huge headache in scheduling.

Is this a known bottleneck or is there a way around to get at least
20MB/Sec?

Kind Regards
Mike Wiggan, TCS/31
Infrastructure Integration Specialist
Petroleum Development Oman LLC
(michael.ma.wiggan AT pdo.co DOT om)