Can you prevent TSM clients from writing directly to tape?

droach
ADSM.ORG Senior Member · Joined Jan 7, 2008 · Messages: 239 · Location: Cut and Shoot, Texas

Is there a way to prevent a client from bypassing the diskpool and writing directly to tape? I would prefer that my TDP clients not mount a tape directly. For some reason my TDP clients occasionally skip the diskpool and grab a tape drive, even though it appears (to me) that there is enough space in the diskpool.

Is there any way to see the communication between the TDP client and the TSM server when they are negotiating how much data the client will send and how much space is available in the diskpool?
 

I have seen this happen a lot, even when I thought there was enough disk space. In reality there wasn't: the total size of the files to be backed up exceeded the disk pool space available.

The ways you can prevent this from happening are:

1. Create a huge disk pool that anticipates the largest amount of data (collectively) that TSM will back up at any one time. If you have 100 nodes and each can potentially send 100 GB at any time, create a 10 TB disk pool. This can end up being an insanely big disk pool.
2. Do not define a next storage pool - meaning there is no migration target after the disk pool. This is NOT recommended.

So, you are left with option 1.
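
If you go with option 1, a rough sketch of the commands involved (DISKPOOL, the volume path, and the size are placeholders - adjust them for your environment):

query stgpool DISKPOOL format=detailed
define volume DISKPOOL /tsm/diskpool/vol21.dsm formatsize=102400

The query shows the current size, utilization and next pool; the define volume adds another pre-formatted 100 GB volume to grow the pool.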
 

Perhaps set the max mount points to 0 for the client.

"upd no XXX maxmp=0".
I can still make a backup to the diskpool when trying thi.
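
For reference, the long form and a quick check (XXX is whatever node name you use):

update node XXX maxnummp=0
query node XXX format=detailed

Keep in mind that, as far as I understand it, with MAXNUMMP=0 the node cannot mount any sequential volume at all, so if the diskpool does fill up mid-backup the session will fail instead of spilling over to tape.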
 

Make sure that the disk pool doesn't have a maximum file size limit set that could force large files to go to the next pool.
Maximum Size Threshold: No Limit

I have run into this problem as well, and the only fixes were to make the pool larger, or to stagger backups so the pool was nearly empty before a large backup started.
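
If you want to double-check the size limit (DISKPOOL is a placeholder):

query stgpool DISKPOOL format=detailed

and, if a limit is set and you don't want one:

update stgpool DISKPOOL maxsize=nolimit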
 

Thanks everyone. So far no luck. The maximum size threshold is set to no limit. The diskpool is now at 10TB, and for the past week the daily total backed up has averaged between 4-6TB. If TSM were operating "correctly", this should be plenty of diskpool space to keep clients from calling for a tape. I have not tried Jeroen's suggestion yet.
 

For those clients that write directly to tape, are you sure that the primary storage pool defined for the domain these nodes reside in is not tape?
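
You can check what the active backup copy group for that domain points at with something like (MYDOMAIN is a placeholder):

query copygroup MYDOMAIN active * type=backup format=detailed

The copy destination listed there is the storage pool the node's data goes to first.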
 

I have this very same issue with TSM for Virtual Environments 6.4 (VMware TDP). All my VM backups go directly to tape. I did a manual backup from the TSM proxy of a single 20 GB VM while the 2.3 TB diskpool was at 34 percent full - that's about 1.5 TB of free space in the diskpool when I did my testing. This was done outside of the backup window with zero other backup sessions running, no other processes running, and no expiration, reclamation, or migration running during this time. Maximum Size Threshold is set to No Limit. I have the TSM proxy node in its own domain, with its own policy set and management class. I have the management class set in the dsm.opt to force it to be used. Still, everything dumps directly to tape.
 

I bumped my diskpool to 12TB and that seems to have stopped the clients from requesting a drive. That is about 2X the amount that this server backs up on a daily basis. I still believe that something (the TSM server, the TDP client, or both) is not calculating the space requirements correctly. Oh well, just throw more storage at TSM and keep it happy.
 

I am in the same boat, though I have noticed that some backups go to the diskpool and some to tape, and I haven't noticed a pattern. At first the backups did go directly to disk, but back then I wasn't running the whole cluster in one job. Now that I am running incremental-forever, the incrementals are going straight to tape.

Has anyone opened a ticket with IBM about this?
 

Well, it looks like I have found the problem in my environment: I had misunderstood the function of the "VMCTLMC" option in dsm.opt. It is about the most crucial thing one needs to set, but it is rather hidden, IMO. I had thought that the TOC field in the management class was the same thing, but it wasn't.

The VMCTLMC option determines where the .CTL (control) files are stored. If it is not set correctly, the control files go to the main backup pool and are then migrated to tape. At that stage VE starts to act funny.

You need to create a diskpool that will not be migrated to tape. Then create a new management class for the policy domain you use for VE backups and set that diskpool as the backup destination. Then add "VMCTLMC <new management class>" to the dsm.opt of the datamover node.
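
Roughly, the pieces look like this - I'm writing from memory, so double-check the syntax, and all the names (VECTLPOOL, MYVEDOMAIN, MYPOLICYSET, VECTL_MC, the volume path) are placeholders; it assumes the VE policy domain and policy set already exist:

/* disk pool with no next storage pool, so the control files never migrate */
define stgpool VECTLPOOL disk maxsize=nolimit
define volume VECTLPOOL /tsm/vectl/vol01.dsm formatsize=51200

/* management class and backup copy group pointing at that pool */
define mgmtclass MYVEDOMAIN MYPOLICYSET VECTL_MC
define copygroup MYVEDOMAIN MYPOLICYSET VECTL_MC type=backup destination=VECTLPOOL
activate policyset MYVEDOMAIN MYPOLICYSET

And in the datamover's dsm.opt:

VMCTLMC VECTL_MC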

I just did the changes so I can't be sure everything is fixed but so far so good :)
 

Hi, I have this same problem.
All node backups go to tape instead of to the diskpool. I made the VMCTLMC change, with no luck.
 
Did you restart the CAD/scheduler after the change?
Does the management class you used for VMCTLMC point to a disk pool?
Does that disk pool have a tape pool in the NextPool? If so, that is your problem.
Also, are you sure you updated the right .opt file?

Worth noting that VMCTLMC only sends the control files to that management class. You have to use VMMC or "include.vm VMname MC" to send the actual data files to that management class.
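
In the datamover's dsm.opt that would look something like this (management class and VM names are placeholders):

VMCTLMC VECTL_MC
VMMC VEDATA_MC
INCLUDE.VM myvmname VEDATA_MC

The first line covers only the control files, the second sets the default management class for all VM data, and the third overrides it for a single VM.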
 

Hi Marclant,
thanks for the quick reply. I restarted the CAD, and VMCTLMC points to the right diskpool, but the only problem is that my diskpool has the tape pool as its next pool.
Do you think it is not recommended to change this setting?
 

It depends. Typically you wouldn't: you'd create a new diskpool with no next pool, create a new management class that goes to that new pool, and use that as the VMCTLMC management class.

You can remove the next pool from your existing disk pool, but that means the data in it will never get migrated to tape. Only remove it if that is what you want for that storage pool.
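
For completeness, removing the next pool is just (DISKPOOL and TAPEPOOL are placeholders):

update stgpool DISKPOOL nextstgpool=""

and you can put it back later with nextstgpool=TAPEPOOL if you change your mind.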
 

I'm not following this issue at all. It's pretty basic that you can direct TSM to back up to either disk or tape based upon how you set up the management class. If you set it to write to tape, it will write to tape. If you set it to write to disk, you need to make sure you have enough disk space and that no maximum size parameter is specified.
 

If there is insufficient free space in the first storage pool to accommodate the client's backup, the TSM server will redirect the TSM client to send its data to the 'Next Storage Pool'.

A typical configuration for storage pools is for the first pool to be disk-based and the next pool to be tape-based. If the disk pool is full, TSM will start allowing the clients to send their data to the next pool. If that next pool happens to be a tape-based pool, TSM starts giving individual clients tape drives to write to. It isn't a pretty sight...

The only way I was able to prevent this was to remove the 'Next' storage pool from the configuration and set up copy jobs to 'copy' the data to my tape pools instead of 'migrating' the data.
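
In practice that meant a copy storage pool plus an administrative schedule along these lines (pool names, device class, and times are placeholders; check the syntax against your level of TSM):

define stgpool TAPECOPYPOOL LTOCLASS pooltype=copy maxscratch=50
define schedule COPY_DISKPOOL type=administrative cmd="backup stgpool DISKPOOL TAPECOPYPOOL" active=yes starttime=06:00 period=1 perunits=days

with no next storage pool defined on DISKPOOL, so clients can only ever write to disk.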
 