>Here's our statistics for one of the NT IBM 7000 servers:
>
>objects inspected: 2,607,398
>objects backed up: 7,118
>objects updated: 3
>objects rebound: 0
>objects deleted: 2,157
>objects failed: 269
>bytes transferred: 806.08 MB
>Data transfer time: 40.68 sec
>Network data transfer rate: 20,286.74 KB/sec
>Aggregate data transfer rate: 22.55 KB/sec
>Objects compressed by: 47%
>Elapsed processing time: 10:09:55
>
>Took 10 hours to back up 800MB! Maybe 1.5 GB an hour is not so bad...
Dmitri - Too often customers look only at the rate numbers and, based upon
those alone, are dismayed at their backup timings. In your case, it took
10 hours to:
- get a list of some 2.6 million files from the server
- massage and sort that in client memory
- traverse the file system seeking files needing to be backed up, per list
comparison
- repeatedly retry files that are unavailable at the moment, or that change
during the backup operation (compression lengthens the time each file is held
open, and so widens the window in which the file may change mid-backup)
- read, compress and send backup data to the server, as well as notify it of
files gone since the last backup.
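To make the list-comparison step concrete, here is a minimal sketch in Python
(not ADSM code; the inventory format, change test, and retry handling are all
simplified assumptions) of incremental backup by comparing the server's file
list against a walk of the client file system:

```python
import os

def incremental_backup(root, server_inventory):
    """Sketch of incremental-by-comparison.

    server_inventory: dict mapping path -> (mtime, size) as of the
    last backup (a stand-in for the list fetched from the server).
    Returns (files to back up, files deleted since last backup).
    """
    to_back_up, seen = [], set()
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            seen.add(path)
            try:
                st = os.stat(path)
            except OSError:
                continue  # unavailable right now; the real client retries
            if server_inventory.get(path) != (st.st_mtime, st.st_size):
                to_back_up.append(path)  # new or changed since last backup
    # Anything in the server's list but not on disk is reported as deleted
    deleted = [p for p in server_inventory if p not in seen]
    return to_back_up, deleted
```

Note that every file in the file system must be visited and compared even
when almost none of them need backing up - which is why traversal can
dominate the elapsed time.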
The number of files backed up was 0.27% of the file system complement, which
suggests that the time was dominated by traversing the client file system and
compressing files. One should periodically perform a client trace to see a
breakdown of where the time is actually going, to tune or rearrange.
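The arithmetic from the report bears that out; here is a quick
back-of-the-envelope check (plain Python, figures copied from the session
statistics above):

```python
# Figures copied from the session statistics
inspected    = 2_607_398                 # objects inspected
backed_up    = 7_118                     # objects backed up
mb_sent      = 806.08                    # bytes transferred, in MB
xfer_secs    = 40.68                     # data transfer time
elapsed_secs = 10 * 3600 + 9 * 60 + 55   # elapsed processing time, 10:09:55

kb_sent = mb_sent * 1024
print(f"fraction backed up:  {backed_up / inspected:.2%}")          # ~0.27%
print(f"aggregate rate:      {kb_sent / elapsed_secs:.2f} KB/sec")  # ~22.56
print(f"moving data for only {xfer_secs / elapsed_secs:.2%} of the run")
```

The session spent well under 1% of its elapsed time actually transferring
data, which is why the aggregate rate (22.55 KB/sec) is three orders of
magnitude below the network rate: nearly all of the 10 hours went to
traversal, comparison, and compression.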
Obviously, many factors affect backup speed:
- The CPU power of the client (in your case, a big factor in compression)
- Amount of real memory (vital for efficiently handling the large file table
received from the server)
- Contention with other processes in the client system
- Disk speed, needed to get through the file system asap
- File system topology, where overpopulation of directories will slow down
any access to such areas
- Raw network speed, combined with traffic contention, relative buffer sizes,
and hardware compatibility
- Client option choices
- ADSM transaction size values and byte limits, in client and server
- Availability of server resources (tape drives, space in disk storage
pools...which might induce delays in migrating to make space)
- Server database buffer tuning
- Server contention
Taking periodic in-depth looks at what's going on is important in
administering client-server systems, and becomes a priority when long windows
are being experienced. It's also very satisfying to feel that you know what's
going on, and have some control over it - not to mention having answers for
management when they inevitably ask about the long run times.
Richard Sims, BU