Hi Henrik,
unfortunately I don't have any specific recommendations for your situation, but
consider looking at some of these:
(our ADSM server is a silver node, 4-way, 2 GB memory)
The following are segments from an article that I'm writing and hoping to get
published; good or bad, please let me know what you think of the information!
1. example from my TSM (ADSM) server:
The biggest benefit I found came from tuning vmtune. It is found in
/usr/samples/kernel/vmtune (if you have installed bos.adt.samples). The
parameters that I have changed are:
-r 'r' is the minimum number of pages that need to be read
sequentially before the VMM detects a sequential read. I have this set to 2.
-R To find the right setting for 'R', I used /usr/bin/filemon. First
I set R to an arbitrarily large value, such as 128. Then I started a
filemon session and copied a large sequential file to /dev/null. Next,
look at the filemon report for the LV that contains the filesystem:
example: from ADSM server node
VOLUME: /dev/dbdumps_lv description: /home/dbdumps
reads:              3356     (0 errs)
read sizes (blks):  avg 194.3   min 8      max 256     sdev 90.6
read times (msec):  avg 35.693  min 1.096  max 488.857 sdev 39.371
read sequences:     2835
read seq. lengths:  avg 230.0   min 8      max 512     sdev 68.5
Notice that the min read size is 8 blocks and the max is 256. Remember
that a block is 512 bytes, so the max is actually 128 KB, or 32 x 4 KB
pages. That means the maximum number of read-ahead pages that will
happen is 32. This number will depend on your adapter and your disks.
(The exact commands I use are sketched at the end of this parameter
list.)
-f 'f' is the lower limit of free pages. When the free list hits
this number the VMM starts to free memory pages. IBM recommends that
this number be 120 x the number of CPUs you have. I would try different
combinations to see what gives you the best performance. Most of my
boxes have this set to 256.
-F 'F' is the upper limit of free pages. When the VMM is freeing
pages and hits this many free, it stops. 'F' should be at least f + R
(all three values are counted in 4 KB pages); for example, with f = 256
and R = 32, F should be 288 or more. This guarantees that there will be
enough free memory for one full read-ahead. I follow this for all of my
boxes.
-h Setting 'h' to '1' (one) enforces maxperm strictly. I had to set
this on my production nodes because they were using too much memory
(8-10 GB) for file caching; managing that much file cache was causing
the HACMP daemons to time out, which made HACMP think the node had
failed.
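Here is roughly how I collect the filemon report shown under -R above.
This is a sketch: the file name is just an example, and the final value
of 32 comes from my measurements above, so substitute your own.

# temporarily set R very high so the hardware limit shows through
/usr/samples/kernel/vmtune -R 128
# start a filemon trace, reporting at the logical volume level
filemon -o /tmp/fmon.out -O lv
# drive a large sequential read through the filesystem being measured
cp /home/dbdumps/bigfile /dev/null
# stop the trace; the report lands in /tmp/fmon.out
trcstop
# max read size was 256 blocks = 128 KB = 32 x 4 KB pages, so R = 32
/usr/samples/kernel/vmtune -R 32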
WARNING: DO NOT CHANGE THESE PARAMETERS ON THE FLY, especially on a
production machine; it may have disastrous results. I recommend setting
them at boot from /etc/rc.local, as sketched below.
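Something like the following line in /etc/rc.local would apply the
values discussed above at every boot. This is a sketch with my values;
measure and adjust for your own box. (vmtune settings do not survive a
reboot, which is exactly why they belong in an rc script.)

# VMM tuning at boot:
#   -r 2   detect a sequential read after 2 pages
#   -R 32  read ahead up to 32 pages (128 KB), matched to our disks
#   -f 256 low water mark of the free list
#   -F 384 high water mark; at least f + R (256 + 32 = 288)
#   -h 1   enforce maxperm strictly, capping file-cache memory
/usr/samples/kernel/vmtune -r 2 -R 32 -f 256 -F 384 -h 1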
2. Queue depth
If you are using SSA disks, and especially if you are using a RAID array,
check the queue depth.
EX:
root@unxd:/>lsattr -El hdisk5
pvid            00011396f62aa5420000000000000000 Physical volume identifier  False
queue_depth     112                              Queue depth                 True
write_queue_mod 1                                Write queue depth modifier  True
adapter_a       ssa0                             Adapter connection          False
adapter_b       none                             Adapter connection          False
primary_adapter adapter_a                        Primary adapter             True
reserve_lock    yes                              RESERVE device on open      True
connwhere_shad  139639D79D1F4CE                  SSA Connection Location     False
max_coalesce    0x20000                          Maximum coalesced operation True
size_in_mb      255117                           Size in Megabytes           False
location                                         Location Label              True
If the last column is 'True', the value can be changed; if it is
'False', the information is for display only.
One parameter that I always check, especially with RAID arrays, is
queue_depth. This specifies the number of commands that can be
outstanding against the logical disk. With RAID arrays it is very
important to make sure that this number is the queue depth of an
individual disk multiplied by the number of disks in the array. In this
example it is 8 x 14 = 112. The multiplier is 14 rather than 16 because
the equivalent of one disk is used for parity and one is a hot spare.
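If the value needs changing, chdev will do it. A sketch, assuming the
array is hdisk5 and is not currently open (if it is, -P records the
change so it takes effect at the next boot):

# queue depth = per-disk depth x disks in the array (8 x 14 = 112)
chdev -l hdisk5 -a queue_depth=112
# or defer the change until the next boot if the disk is in use
chdev -l hdisk5 -a queue_depth=112 -P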
3. Are you using LVM striping?
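If you are not, striping the disk pool volumes across several disks can
improve sequential throughput. A sketch with hypothetical names
(diskpool_lv, tsmvg), creating a 64 KB strip across three disks:

# 300 LPs spread evenly over three disks with a 64K strip size
# (the LP count must divide evenly by the number of disks)
mklv -y diskpool_lv -S 64K tsmvg 300 hdisk2 hdisk3 hdisk4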
4. Are you using the fast write cache (FWC)?
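You can check from lsattr. On our SSA configuration the attribute is
named fastwrite; I am assuming yours shows the same name, so verify
against your adapter's documentation:

# look for the fast write cache attribute (name assumed: fastwrite)
lsattr -El hdisk5 | grep -i fastwrite
# enable it; -P defers the change to the next boot if the disk is open
chdev -l hdisk5 -a fastwrite=yes -P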
5. Is your TSM database on the same disks as your disk storage pool? You may
want to separate them if they are.
6. other recommendations:
-put the busiest disks in the centre of the loop, to give the less active
disks a chance to transfer information
-when using RAID 1 or RAID 10, make sure that half the disks are closest
to one port and the other half of the disks are closest to the other port
-try not to put more than 32 disks per loop
-use the fast write cache, especially for RAID 5 arrays
Our setup:
I have 16 SSA disks.
- 2 disks are configured as one RAID 1 array, in positions 1 and 16 in the
I/O drawer - equal distance from the adapter. This holds the ADSM database and
log.
- 6 disks hold my disk pool (RAID 0)
- 8 disks hold a filesystem that is used to back up Sybase databases. It
is NFS mounted to all the other nodes.
Let me know if I can help.
Miles
-------------------------------------------------------------------
Miles Purdy
System Manager
Farm Income Programs Directorate
Winnipeg, MB, CA
purdym AT fipd.gc DOT ca
ph: (204) 984-1602 fax: (204) 983-7557
-------------------------------------------------------------------
>>> Henrik.Ursin AT UNI-C DOT DK 08-May-01 3:51:19 AM >>>
I need some advice on performance issues regarding TSM 4.1.2.0 running on
an AIX 4.3.3 box.
I've been trying to optimize the network throughput, but without much
luck. Then I tried backing up the TSM server through lo0, and I ended up
with the following result:
Total number of objects inspected: 49,295
Total number of objects backed up: 49,295
Total number of objects updated: 0
Total number of objects rebound: 0
Total number of objects deleted: 0
Total number of objects expired: 0
Total number of objects failed: 0
Total number of bytes transferred: 2.45 GB
Data transfer time: 546.68 sec
Network data transfer rate: 4,706.32 KB/sec
Aggregate data transfer rate: 2,041.05 KB/sec
Objects compressed by: 0%
Elapsed processing time: 00:03:00
which is not fantastic throughput.
I'm backing up to a disk pool of SSA disks (no RAID), and I have a 30 GB
database which has never been defragmented.
Is it a memory tuning issue?
Any suggestions are welcome.
Does anyone know of a new Tivoli performance tuning guide? I have a (not
up-to-date) SP performance tuning guide.
Med venlig hilsen / Regards
Henrik Ursin Tlf./Phone +45 35878934
Fax +45 35878990
Email henrik.ursin AT uni-c DOT dk
Mail: UNI-C
DTU, bygning 304
DK-2800 Lyngby