I have seen this also. There is no mistake in the numbers. The trick is that
the backup is multi-stream.
Pretend that you have really slow disks that can only read at 2 MB/s, but your
server and network can handle much more. Also pretend you have 3 independent
sets of disks (or maybe RAID arrays).
So what happens is this: at time zero, the TSM client starts 3 reads (one per
RAID array, say). Each read goes at 2 MB/s, so each individual network stream
is only moving data at 2 MB/s. But at the end, at time 100 s, you have
transferred 2 * 3 * 100 = 600 MB in 100 s of wall-clock time, or 6 MB/s. The
data transfer time is summed across the sessions (3 * 100 = 300 s), so the
network data transfer rate works out to 600 / 300 = 2 MB/s, while the
aggregate rate is 600 / 100 = 6 MB/s.
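Here is that arithmetic as a quick Python sketch (just a toy model of how the
two rates are computed, using the made-up figures above; this is not TSM code):

    # Toy model: 3 concurrent streams, each reading at 2 MB/s, for 100 s of wall-clock time.
    streams = 3
    per_stream_mb_s = 2.0
    elapsed_s = 100.0

    total_mb = streams * per_stream_mb_s * elapsed_s   # 600 MB moved in total
    transfer_time_s = streams * elapsed_s              # 300 s, summed across sessions

    print(total_mb / transfer_time_s)   # 2.0 MB/s -> the "network" rate
    print(total_mb / elapsed_s)         # 6.0 MB/s -> the "aggregate" rate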
Try running 'run q_proc_stats' and 'run q_ses_stats' (these are local server
command scripts) to see the individual streams.
Here is an example:
I started a backup of UNXP:
tsm: UNXR>run q_ses_stats
  Session  Client    State   Elapsed Time  Bytes sent/second  Bytes received/second
  -------  --------  ------  ------------  -----------------  ---------------------
  17644    SUMALINV  IdleW   0 01:12:15                 3.04                   0.09
  17655    PURDYM    Run     0 00:00:42                28.95                   4.45
  17656    UNXP      IdleW   0 00:00:28             1,253.18                  17.11
  17657    UNXP      MediaW  0 00:00:27                12.96          13,165,670.63
  17658    UNXP      MediaW  0 00:00:20                17.50          16,305,390.30
  17659    UNXP      MediaW  0 00:00:12                29.17          21,583,245.17
ANR1462I RUN: Command script Q_SES_STATS completed successfully.
Notice the UNXP client lines.
A couple of minutes later, I see this:
tsm: UNXR>run q_ses_stats
  Session  Client    State   Elapsed Time  Bytes sent/second  Bytes received/second
  -------  --------  ------  ------------  -----------------  ---------------------
  17644    SUMALINV  IdleW   0 01:16:25                 2.87                   0.09
  17655    PURDYM    Run     0 00:04:52                27.17                   1.51
  17656    UNXP      IdleW   0 00:04:38               126.25                   1.75
  17657    UNXP      Run     0 00:04:37                 1.35           8,231,446.26
  17658    UNXP      RecvW   0 00:04:30                 1.37           7,437,697.67
  17659    UNXP      RecvW   0 00:04:22                 1.49          12,208,151.38
ANR1462I RUN: Command script Q_SES_STATS completed successfully.
Notice the four UNXP sessions in the report: 17656 is the control session; the
other three are transferring data. (A client opens multiple data sessions like
this when its RESOURCEUTILIZATION option allows it.)
And still later:
tsm: UNXR>run q_ses_stats
  Session  Client    State   Elapsed Time  Bytes sent/second  Bytes received/second
  -------  --------  ------  ------------  -----------------  ---------------------
  17644    SUMALINV  IdleW   0 01:25:32                 2.57                   0.08
  17655    PURDYM    Run     0 00:13:59                17.08                   0.80
  17656    UNXP      IdleW   0 00:13:45                42.56                   0.68
  17657    UNXP      RecvW   0 00:13:44                 0.58          12,140,523.48
  17658    UNXP      RecvW   0 00:13:37                 0.58          11,900,639.01
  17659    UNXP      RecvW   0 00:13:29                 0.61          13,460,839.52
ANR1462I RUN: Command script Q_SES_STATS completed successfully.
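As a sanity check (my arithmetic, not anything TSM prints), the three data
sessions' receive rates in that last report add up to just about the aggregate
rate in the client summary further down:

    # Sum the bytes-received-per-second figures of the three UNXP data sessions.
    rates = [12140523.48, 11900639.01, 13460839.52]
    print(sum(rates) / 1024)   # ~36,623 KB/s, close to the 36,256.13 KB/sec aggregate below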
A quick 'iostat 6' on UNXP confirms the performance:
tty: tin tout avg-cpu: % user % sys % idle % iowait
0.0 801.3 38.7 20.8 22.9 17.6
Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk17 89.3 7038.0 147.8 42240 0
hdisk12 88.0 6547.5 141.5 39296 0
hdisk18 78.8 7006.1 109.3 42048 0
hdisk13 77.5 6536.9 102.0 39232 0
hdisk14 91.1 6938.1 151.1 41640 0
hdisk16 77.5 6963.4 109.5 41792 0
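Summing the Kbps column gives the client's total read throughput (again just
quick arithmetic on the sample above):

    # Total read throughput across the six busy disks in the iostat sample.
    kbps = [7038.0, 6547.5, 7006.1, 6536.9, 6938.1, 6963.4]
    print(sum(kbps) / 1024)   # ~40 MB/s of reads, in line with the transfer rates below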
And finally:
Total number of objects inspected: 148
Total number of objects backed up: 49
Total number of objects updated: 0
Total number of objects rebound: 0
Total number of objects deleted: 0
Total number of objects expired: 25
Total number of objects failed: 0
Total number of bytes transferred: 31.96 GB
Data transfer time: 721.50 sec
Network data transfer rate: 46,451.19 KB/sec
Aggregate data transfer rate: 36,256.13 KB/sec
Objects compressed by: 0%
Elapsed processing time: 00:15:24
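Both rates can be reproduced from the totals (assuming the client's GB means
1,048,576 KB, which is how the numbers work out within rounding of the
31.96 GB figure):

    # Check the client summary above.
    total_kb = 31.96 * 1024 * 1024      # bytes transferred, expressed in KB
    print(total_kb / 721.50)            # ~46,450 KB/sec network rate
    print(total_kb / (15 * 60 + 24))    # ~36,270 KB/sec aggregate rate (elapsed = 924 s)

Here the data transfer time (721.50 s) is shorter than the elapsed time
(924 s), so the network rate comes out higher.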
Well, I got this far before realizing my own example doesn't actually show the
inversion (my network rate came out higher than my aggregate rate), but I'm
sending this anyway! You get the idea.
But here is what you asked about:
05/31/02 03:22:28 ANE4952I (Session: 17554, Node: UNXR) Total number of objects inspected: 108,705
05/31/02 03:22:28 ANE4954I (Session: 17554, Node: UNXR) Total number of objects backed up: 665
05/31/02 03:22:28 ANE4958I (Session: 17554, Node: UNXR) Total number of objects updated: 0
05/31/02 03:22:28 ANE4960I (Session: 17554, Node: UNXR) Total number of objects rebound: 0
05/31/02 03:22:28 ANE4957I (Session: 17554, Node: UNXR) Total number of objects deleted: 0
05/31/02 03:22:28 ANE4970I (Session: 17554, Node: UNXR) Total number of objects expired: 557
05/31/02 03:22:28 ANE4959I (Session: 17554, Node: UNXR) Total number of objects failed: 0
05/31/02 03:22:28 ANE4961I (Session: 17554, Node: UNXR) Total number of bytes transferred: 56.32 GB
05/31/02 03:22:28 ANE4963I (Session: 17554, Node: UNXR) Data transfer time: 3,172.12 sec
05/31/02 03:22:28 ANE4966I (Session: 17554, Node: UNXR) Network data transfer rate: 18,617.20 KB/sec
05/31/02 03:22:28 ANE4967I (Session: 17554, Node: UNXR) Aggregate data transfer rate: 26,310.05 KB/sec
05/31/02 03:22:28 ANE4968I (Session: 17554, Node: UNXR) Objects compressed by: 0%
05/31/02 03:22:28 ANE4964I (Session: 17554, Node: UNXR) Elapsed processing time: 00:37:24
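The same check on this log shows exactly where the inversion comes from: the
data transfer time, summed across sessions (3,172 s), is longer than the
elapsed wall-clock time (2,244 s), which can only happen when sessions
overlap. Your log at the bottom works out the same way: 16,002 s of transfer
time inside 6,130 s elapsed, i.e. roughly 2.6 sessions moving data at any
moment.

    # Check the actlog summary above (GB again taken as 1,048,576 KB).
    total_kb = 56.32 * 1024 * 1024
    transfer_time_s = 3172.12           # summed across all sessions
    elapsed_s = 37 * 60 + 24            # 00:37:24 -> 2,244 s

    print(total_kb / transfer_time_s)   # ~18,618 KB/sec network rate
    print(total_kb / elapsed_s)         # ~26,317 KB/sec aggregate rate
    print(transfer_time_s / elapsed_s)  # ~1.41 sessions transferring, on average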
Miles
---------------------------------------------------------------
Miles Purdy
System Manager
Farm Income Programs Directorate
Winnipeg, MB, CA
purdym AT fipd.gc DOT ca
ph: (204) 984-1602  fax: (204) 983-7557
"If you hold a UNIX shell up to your ear, can you hear the C?"
---------------------------------------------------------------
>>> puimym AT HK1.IBM DOT COM 31-May-02 2:57:26 AM >>>
Dear TSMers,
As I understand it (please correct me if I'm wrong):

network data transfer rate = total number of bytes transferred / data transfer time
aggregate data transfer rate = total number of bytes transferred / elapsed processing time

where the elapsed processing time includes the data transfer time plus other
miscellaneous processing time.
So the network data transfer rate should be greater than the aggregate data
transfer rate. However, I have a customer whose aggregate data transfer rate
is greater than the network data transfer rate. Does anyone have any idea?
I just want to check whether something is wrong, and where the performance
bottleneck is.
The environment is like this (LAN-free backup of files):
TSM server (AIX 5L) ver 4.2.1.11
Storage Agent ver 4.2.1.11
TSM Client (AIX 5L) ver 4.2.1.15
05/28/02 23:42:40 ANS1898I ***** Processed 1,500 files *****
05/28/02 23:42:40 --- SCHEDULEREC STATUS BEGIN
05/28/02 23:42:40 Total number of objects inspected: 1,890
05/28/02 23:42:40 Total number of objects backed up: 1,890
05/28/02 23:42:40 Total number of objects updated: 0
05/28/02 23:42:40 Total number of objects rebound: 0
05/28/02 23:42:40 Total number of objects deleted: 0
05/28/02 23:42:40 Total number of objects expired: 0
05/28/02 23:42:40 Total number of objects failed: 0
05/28/02 23:42:40 Total number of bytes transferred: 33.29 GB
05/28/02 23:42:40 Data transfer time: 16,002.26 sec
05/28/02 23:42:40 Network data transfer rate: 2,181.87 KB/sec
05/28/02 23:42:40 Aggregate data transfer rate: 5,695.58 KB/sec
05/28/02 23:42:40 Objects compressed by: 0%
05/28/02 23:42:40 Elapsed processing time: 01:42:10
05/28/02 23:42:40 --- SCHEDULEREC STATUS END
05/28/02 23:42:40 --- SCHEDULEREC OBJECT END NOTES13BK 05/28/02 22:00:00
Many thanks.
Best Regards,
Molly