MediaWait status on nodes being backed up

spiffy

ADSM.ORG Member
Joined
Feb 9, 2007
Messages
374
Reaction score
1
Points
0
I have a question about how clients' backup data is handled by TSM.
I am still at the bottom of the TSM learning curve here, as is evident from my other posts, so bear with me if my questions seem trivial.

Here is the scenario:

First storage pool = 200 GB diskpool; migration to the next storage pool happens once the diskpool hits 80% capacity
Next storage pool = Tapepool
Next storage pool = offsite TapeCopyPool
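For reference, a hierarchy like this is usually wired together with NEXTSTGPOOL plus the migration thresholds, while the offsite copy pool is filled by BACKUP STGPOOL rather than chained as a "next" pool. A minimal sketch, using the pool names from this thread plus an assumed device class called LTOCLASS and example threshold values:

/* random-access disk pool that migrates to tape at 80% utilized; lowmig=60 is just an example */
define stgpool DISKPOOL disk highmig=80 lowmig=60 nextstgpool=TAPEPOOL
/* primary tape pool on the assumed LTOCLASS device class */
define stgpool TAPEPOOL LTOCLASS maxscratch=100
/* copy pool for the offsite tapes */
define stgpool TAPECOPYPOOL LTOCLASS pooltype=copy maxscratch=100
/* run (or schedule) this to populate the offsite copies */
backup stgpool TAPEPOOL TAPECOPYPOOL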

I have a bunch of clients being backed up nightly via a schedule that was created and points to a policy domain. Under Management Classes, if you look at the policy domain that the schedule is associated with, the Migration Destination is set to the diskpool.

Here is my question: this morning I came in, did a q sess, and saw a bunch of my nodes with a status of MediaWait. I checked the diskpool and it was at 46% utilized, so there was plenty of room to move data to. I wanted to check whether those clients were set to go to disk or tape first, so I went to the TSM web admin console, clicked Policy Domains, then Management Classes, found the policy domain the clients were associated with, and saw that they all pointed to the diskpool as the migration destination. So why would they be waiting for media if they are set to write to a diskpool that was nowhere close to full?
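One thing worth noting: for backup data, the destination that matters is the backup copy group's Copy Destination; the Migration Destination on the management class applies to HSM space management, not to backups. From the admin command line, something like this would show both the real destination and what the sessions are waiting on (DOMAINNAME is a placeholder for the actual policy domain):

/* where backup data actually lands: the backup copy group's destination */
query copygroup DOMAINNAME active type=backup format=detailed
/* diskpool utilization, next pool, and migration thresholds */
query stgpool DISKPOOL format=detailed
/* sessions in MediaW, including the volume each one is waiting for */
query session format=detailed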

Also, when a migration from the diskpool to the tapepool is happening, does it cause nodes that are backing up to wait until the migration hits the low migration point and the migration process ends, and then resume backing up to the diskpool?

thanks
James
 
No, nodes continue to back up to a diskpool that is above himig. But if a node runs out of space in the diskpool, it will switch to wanting to write to tape. Once it has made this switch, it won't switch back to the diskpool no matter what its %util is. This may have happened to you.
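A quick way to see, and if needed tune, those thresholds; 70 and 40 below are just example values:

/* shows High Mig Pct, Low Mig Pct, Next Storage Pool, and %util */
query stgpool DISKPOOL format=detailed
/* start migration earlier and drain further */
update stgpool DISKPOOL highmig=70 lowmig=40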
 
OK, so this morning I looked at the sessions and noticed the MediaWait status on some nodes. I took a look at my stgpool (46% used) and at my processes: only two were running, both Tapepool to TapeCopyPool migrations, with no diskpool-to-tapepool migrations happening at all.
I have three drives dedicated to my tapepool and two to my offsite tapecopypool, so two of the three tapepool drives were in use, which left one drive available.
During this time I kept looking at my sessions; the MediaWait status was still present, but no tape was mounted in the available drive to continue processing the nodes.
About 30 minutes passed, and I noticed the MediaWait was gone from the nodes in question and they were writing again, but not to tape: my free drive was still free, my other two drives were still performing the tapepool-to-copypool migration, and my actlog did not indicate any tape mounts for that drive during this time. It appears they started writing to the diskpool again.
I am still confused about why they would not write to a diskpool at 46% utilized, would wait for media even with a drive free (the drive was working and online, with its path online), and would then all of a sudden continue writing to the diskpool instead of loading a tape in the available drive (plenty of scratch tapes were present) and finishing the backup to tape...
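For a situation like that, a few queries run while the sessions sit in MediaW would usually show what they are waiting for; the times below are only examples:

/* what is mounted right now, and in which drives */
query mount
/* the migration / storage pool backup processes holding the drives */
query process
/* any outstanding mount or operator requests */
query request
/* replay the activity log for the window in question */
query actlog begintime=06:00 endtime=07:00 search=mount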

I am probably making something out of nothing; it all worked out in the end. The nodes completed their backups and the diskpool-to-tapepool migrations have since happened, but I am perplexed by this one...

If I understand what was written above, then say NodeA was backing up to the diskpool and the diskpool hit 100% capacity, so NodeA switched over to backing up to the tapepool directly. Migration of the diskpool kicked in sometime in the morning before I arrived, the diskpool dropped below the lowmig threshold, and the migration ended. So NodeA should still have been writing to the tapepool, since it could not switch back to backing up to the diskpool?
 
Yes to your last paragraph, that's right. But if the client switches to a new session for some reason, that would let it go back to disk.

I'm clutching at straws now, but there are a few other reasons I can think of for what you saw (see the command sketch after this list):
- First, if it actually wanted to mount one of the tapes that was in use by the migration process, it would wait. Is there a MAXSIZE set on that disk storage pool?
- Is there a mount limit on that *tape*pool?
- Also, if there are any sequential FILE (on disk) pools, you can get MediaW on those too, I think. For example, if you have a sequential disk pool for directories for use with DIRMC. But for that to be an issue you'd need only 1 or 2 volumes in its mount limit and be backing up a lot of directories... which would be unlikely, I think.
- Did the actlog indicate that those sessions didn't mount any tapes at all? I'm not sure whether they would be able to pre-empt a migration and temporarily steal one of those tapes.
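Something along these lines would check each of those points, using the pool names from this thread:

/* 1. MAXSIZE shows up as "Maximum Size Threshold" in the detailed output */
query stgpool DISKPOOL format=detailed
/* 2. the mount limit lives on the device class the tape pool uses */
query devclass format=detailed
/* 3. list all pools to spot any sequential FILE pools that could also sit in MediaW */
query stgpool
/* 4. look for tape mounts (or the lack of them) for those sessions */
query actlog search=mount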
 
No, nodes continue to back up to a diskpool that is above himig. But if a node runs out of space in the diskpool, it will switch to wanting to write to tape. Once it has made this switch, it won't switch back to the diskpool no matter what its %util is. This may have happened to you.

I have noticed with TDP for Domino that once it has switched to tape, if the diskpool has its %util reduced below 100%, the TDP session will eventually switch back to the diskpool. This happens quite often on our system.
 