ADSM-L

Subject: Interpreting Backup Stats
From: Scott McCambly <scottad AT UNOPSYS DOT COM>
Date: Mon, 23 Oct 1995 10:32:12 EDT

Hi everyone,

I've been trying to do some capacity planning for a medium-size ADSM
implementation (all AIX nodes) based on all the sources of information
at my disposal.  My current problem is that what the IBM sizing tool
(an OS/2 utility) is telling me seems to contradict the results I'm
getting from an actual production ADSM environment consisting of 5
RS/6000s with various sizes of workloads.  So I figure either it's
wrong or I don't fully understand the statistics reports from ADSM
(which would you guess ;-).

My main interest is how much utilization I can expect on an FDDI ring
during backup processing.  The sizing tool indicates that my
configuration (basically just 12 AIX servers with medium to large
workloads, all on FDDI) will only cause a utilization of about 6%.
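
For a rough sense of scale, here is a quick back-of-the-envelope sketch
(not from the sizing tool -- just my own arithmetic) of what 6% of a
nominal 100 Mbit/sec FDDI ring would be in raw bandwidth, ignoring
protocol overhead:

# fddi_utilization.py -- illustrative only; assumes FDDI's nominal
# 100 Mbit/sec line rate and ignores token/protocol overhead.
FDDI_KB_PER_SEC = 100_000_000 / 8 / 1024   # ~12,207 KB/sec nominal

predicted_utilization = 0.06               # the sizing tool's 6% figure
implied_rate = predicted_utilization * FDDI_KB_PER_SEC
print(f"6% of FDDI is roughly {implied_rate:,.0f} KB/sec sustained")
# -> 6% of FDDI is roughly 732 KB/sec sustained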

I would like to begin by asking if anyone can explain, in detail, what
the times and rates imply in the following sample of output I obtained
from a small test on one of my servers:

Total number of objects inspected:   71,256
Total number of objects backed up:       23
Total number of objects updated:          0
Total number of objects rebound:          0
Total number of objects deleted:          0
Total number of objects failed:           0
Total number of bytes transferred:     17.5 MB
Data transfer time:                    1.76 sec
Data transfer rate:                10,209.08 KB/sec
Average file size:                  2,565.6 KB
Compression percent reduction:        69.51%
Elapsed processing time:            0:04:51

This of course is not typical of what my backups will look like on the
larger servers, some of which have a number of database files over 1 GB
each; however, if I can interpret one report, I should be able to
interpret them all.  Some questions of clarification might be: Is the
bytes transferred figure calculated before or after compression?  Is
all of the elapsed processing time attributed to the client scanning
the file system, or could the server account for some of the delay
(i.e., the other 4:49)?  Since 10,000 KB/sec is about the maximum
throughput possible on FDDI, can I assume that we maxed out the network
for at least 1.76 seconds of the total 4 minutes, 51 seconds?
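
To make that last question concrete, here is the arithmetic I have in
mind, sketched in Python with the numbers from the sample above.  It
assumes (and this is exactly what I am asking) that "bytes transferred"
is the post-compression figure, "average file size" is pre-compression,
and FDDI runs at a nominal 100 Mbit/sec:

# backup_stats.py -- rough consistency check on the sample report above.
kb_transferred  = 17.5 * 1024           # "bytes transferred", 17.5 MB
transfer_secs   = 1.76                  # "data transfer time"
elapsed_secs    = 4 * 60 + 51           # "elapsed processing time" 0:04:51
files_backed_up = 23
compression     = 0.6951                # "compression percent reduction"
fddi_kb_per_sec = 100_000_000 / 8 / 1024   # nominal FDDI line rate

# Does the reported average file size fall out of the other numbers?
pre_compression_kb = kb_transferred / (1 - compression)
print(f"implied average file size: "
      f"{pre_compression_kb / files_backed_up:,.1f} KB")
# -> ~2,555 KB, close to the reported 2,565.6 KB, which suggests the
#    bytes transferred are post-compression

# Peak versus average network load
print(f"rate during transfer:   {kb_transferred / transfer_secs:,.0f} KB/sec "
      f"({kb_transferred / transfer_secs / fddi_kb_per_sec:.0%} of FDDI)")
print(f"rate over elapsed time: {kb_transferred / elapsed_secs:,.0f} KB/sec "
      f"({kb_transferred / elapsed_secs / fddi_kb_per_sec:.1%} of FDDI)")
# -> ~10,182 KB/sec (83% of FDDI) during the 1.76-second burst, but only
#    ~62 KB/sec (0.5%) averaged over the full 4:51

If that reading is right, the ring is only near saturation for the
brief transfer burst, and the load averaged over the whole backup
window is well under 1% -- but I'd like someone to confirm the
interpretation.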

Please feel free to pose your own questions and post your own
experiences and comments, since I think we could all benefit from a
brief discussion on this.  However, if someone at IBM could provide a
definitive statement on these or other available stats, I would very
much appreciate it.

Thanks,
--
Scott McCambly                                   scottm AT unopsys DOT com
AIX / UNIX Specialist
UNOPSYS Inc.                                                    (613)238-5620
Ottawa, Ontario, Canada                                     Fax:(613)230-3802