Re: [ADSM-L] New personal best.
Hi still Running...
  Sess Comm.  Sess     Wait   Bytes   Bytes Sess  Platform Client Name
Number Method State    Time    Sent   Recvd Type
------ ------ ------ ------ ------- ------- ----- -------- --------------------
12,225 Tcp/Ip RecvW     0 S 309.1 K  10.2 T Node  LinuxPPC MN_PROJECTS_PROXY
12,525 Tcp/Ip RecvW     0 S 138.4 K   9.9 T Node  LinuxPPC MN_PROJECTS_PROXY
Roger Deschner wrote:
I tried this for a while, forcing large client files directly to tape,
but those big files are so awful to back up directly to tape that we
bought more disks. When a client session gets a tape drive for backup,
the network and the client (more often than not, some underpowered Mac
that some dork has filled with video files) can almost never pump enough
data down the wire to keep the tape drive streaming. So it stops and
starts - shoeshining. This really wears out your tapes. They say that
DLT and LTO tapes are good for some huge number of passes in the
millions, but once you start shoeshining you can reach those numbers
surprisingly fast.
To add insult to injury, when a slow client is backing up a huge file to
a tape drive, your TSM Server Log is pinned. We had a log-full server
crash this way.
The only way to positively prevent shoeshining and resulting full log
crashes is to buy more disks, and ensure that all tape writing is done
via TSM Migration from local server disk. Set the file size limits the
same in the primary (disk) and secondary (tape) Storage Pools so that
client backup sessions never mount a tape. The cost of a little more
disk space is really worth it here, in terms of both overall
performance, and tape wear.
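For anyone setting this up, the pool arrangement Roger describes would look
roughly like this; the pool names are illustrative, not his actual config:

```
/* Disk pool drains to tape via migration; keeping MAXSIZE the  */
/* same on both pools means a client session can never overflow */
/* past disk and mount a tape drive itself.                     */
update stgpool diskpool nextstgpool=tapepool maxsize=nolimit
update stgpool tapepool maxsize=nolimit
```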
Another way to reduce shoeshining is to use a reclamation storage pool.
During reclamation, the output tape has to stop while the input tape is
skipping past the already-expired files.
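If you haven't used one, a reclamation storage pool is attached to the tape
pool with the RECLAIMSTGPOOL parameter, so reclaimed data lands on disk/FILE
volumes instead of stop-starting a second tape drive. The pool names here are
placeholders:

```
/* Reclamation output goes to a FILE-class pool, not tape-to-tape */
update stgpool tapepool reclaimstgpool=reclaimpool
```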
In your situation, set RESOURCEUTILIZATION to 10 in the client options
file so that it uses 5 sessions to back up at once. This can really help
with a lot of small files like you have. That old rule of thumb that you
should have enough Storage Pool disk for one normal day of backups still
applies, especially when backing up a monster like this. Then you should
migrate to tape using as many migration processes as you have tape drives.
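The client-side knob is a single line in the options file (dsm.sys on Unix
clients); the value 10 here simply echoes the suggestion above:

```
* dsm.sys (client): allow multiple parallel backup sessions
RESOURCEUTILIZATION 10
```

On the server side, something like UPDATE STGPOOL diskpool MIGPROCESS=4 would
run four migration processes at once, one per drive on a four-drive library;
the pool name and count are examples only.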
Roger Deschner University of Illinois at Chicago rogerd AT uic DOT edu
= "Things are more like they are now than they have ever been before." =
======================== --Dwight D. Eisenhower ========================
On Thu, 12 Apr 2007, Skylar Thompson wrote:
I have tried VIRTUALMOUNTPOINT without success (or without much
experience with it)... I meant that I have one TSM node for one FS
(8,000,000 files today, 16,000,000 in the future, keeping 10
versions). This FS is represented by one node... the first stage is
disk... so when I migrate to tape, I only get one process... :'( :'(
And creating more than one node with virtual mount points seems to me
too tedious to restore from...
Do you use disk at the first stage? Or are you backing up to tape directly?
My solution has been to force big files (>1GB at our site) to tape, but
spool small files on disk. You can do this with the MAXSIZE parameter on
your disk storage pool.
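Concretely, that is one parameter on the disk pool; the pool name is an
example, and the 1G cutoff matches the >1GB threshold mentioned above:

```
/* Files larger than 1 GB bypass this pool and go straight to its */
/* NEXTSTGPOOL (tape); smaller files spool on disk as usual.      */
update stgpool diskpool maxsize=1G
```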
-- Skylar Thompson (skylar2 AT u.washington DOT edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S048, (206)-685-7354
-- University of Washington School of Medicine