Storage pool backup - FILE to LTO performance

BrianV

Hey all,

I recently configured a new TSM instance and set up the primary storage pools as FILE device classes on a Data Domain. I then set up copy pools on a TS3584 library with LTO4 drives. I've noticed that the storage pool backups take a very long time to complete. Are there any recommended settings for the storage pools or dsmserv.opt when moving data from FILE device classes to LTO device classes during the storage pool backup process?

Thanks!

Server - 6.3.4.3
Platform Windows 2008 R2
 
There are two things that can help:
1 - Spread the FILE device class across multiple directories residing on different LUNs. When TSM picks scratch volumes from a FILE pool, it goes to the first directory first, then the second, and so forth, so this spreads the workload across the directories.
2 - Use a larger MAXPROCESS on the BACKUP STGPOOL command to read from multiple FILE volumes and write to multiple tapes simultaneously; see the sketch below.
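For example, a rough sketch with hypothetical names (filedc, fileprimary, ltocopy, and two directories standing in for separate LUNs):

    define devclass filedc devtype=file mountlimit=20 maxcapacity=50G directory="D:\tsmfile1,E:\tsmfile2"
    backup stgpool fileprimary ltocopy maxprocess=4 wait=yes

Each BACKUP STGPOOL process reads from its own FILE volume and writes to its own drive, so keep MAXPROCESS at or below the number of free LTO drives.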
 
The device classes are split up into different MTrees on the Data Domain. This isn't a SAN, so it acts more like a NAS using CIFS. I also run the storage pool backups with MAXPROCESS=2, but it doesn't seem to help. I kicked off a BACKUP STGPOOL 3 hours ago and it has only backed up ~250 GB.
 

If I were doing this setup, I would have a random-access disk pool act as a buffer area for the initial backups: back up the daily data from the disk pool to the offsite tape pool, then migrate it to the online 'tape' (DEVCLASS=FILE).

This is a much faster approach since you can kick off the backup to offsite tape with multiple MAXPROCESS streams.

It is not a cheap solution, but if you really want speed, I believe this is the way to go.

DEVCLASS=FILE is sequential, and transferring to another sequential device carries high overhead from data consistency checks. That is why the process is slow.
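A rough sketch of that layout with hypothetical names (diskbuffer, filepool, offsitetape); the volume path, sizes, and thresholds are just placeholders:

    define stgpool diskbuffer disk description="buffer for nightly backups"
    define volume diskbuffer D:\tsmdisk\vol01.dsm formatsize=51200
    update stgpool diskbuffer nextstgpool=filepool highmig=90 lowmig=70
    backup stgpool diskbuffer offsitetape maxprocess=4 wait=yes
    migrate stgpool diskbuffer lowmig=0 wait=yes

The point is to run BACKUP STGPOOL against the random-access disk buffer before migration, so the copy to LTO never has to read from FILE volumes over CIFS.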
 
So we have an existing TSM server utilizing the same Data Domain, but it uses the VTL feature. The device class is set up as LTO, and it backs up the storage pools to the same tape library. The throughput is much quicker, and I'm struggling to understand why at this point. It's the same Data Domain, same disk, and same target.
 
This has something to do with the network and CIFS.

We have DEVCLASS=FILE but use NFS since we are on Linux. When we first went with a DD solution, we asked IBM to write a special patch for TSM version 6.1.5.4 so that AIO on Linux is maximized. That was then.

The modification was eventually carried over to newer versions; we are at 6.3.3 and all is OK.

I have mentioned in another post that I truly hate CIFS and NFS since they are totally unreliable. Unfortunately, DD does not offer file system sharing over a direct Fibre Channel connection. This is also why VTL is faster - it runs over direct Fibre Channel.

In your case, I would suggest having separate NICs for all CIFS-mounted disk.
 
This post is pretty old, but I would highly recommend ditching the FILE storage pool and going with the VTL method of using the DD. As you've observed, it is much, much faster. That will likely resolve your issue.
 
I ended up building a second VTL instance on the Data Domain. It works much better writing to physical tape.
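Roughly what that looks like, with placeholder names (ddvtl, ddlto, ddprimary); the DEFINE PATH and DEFINE DRIVE commands for the virtual drives are left out of this sketch:

    define library ddvtl libtype=vtl
    define devclass ddlto devtype=lto library=ddvtl
    define stgpool ddprimary ddlto maxscratch=200

With the primary pool on the DD VTL and the copy pool on the TS3584, BACKUP STGPOOL moves data tape-to-tape over Fibre Channel instead of reading FILE volumes over CIFS.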
 