Large File Backup in Windows

Andrew21210 (ADSM.ORG Member, joined Apr 10, 2008)

I have a Windows cluster that backs up some large flat-file database backups. Several files are anywhere from 500GB to 1TB in size, with the typical total size of the backup being 2 to 3TB. Is there any way I could reduce the backup time for this cluster? The resourceutilization is set to 10 and the maxnummp is set to 8, but these don't come into play when a single 1TB file is being backed up. Could I get better performance by playing with tcpwindowsize, txnbytelimit and/or tcpbuffsize parms on the client?
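For reference, the client options mentioned here all live in dsm.opt; a sketch with the values discussed in this thread (the TCPBUFFSIZE value is purely illustrative, and note that MAXNUMMP is a server-side node attribute, not a client option):

```
* dsm.opt sketch; values are from this thread except TCPBUFFSIZE,
* which is illustrative. MAXNUMMP is set per node on the server,
* not here.
RESOURCEUTILIZATION 10
TCPWINDOWSIZE       256
TCPBUFFSIZE         512
TXNBYTELIMIT        10G
```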
 

Could I get better performance by playing with tcpwindowsize, txnbytelimit and/or tcpbuffsize parms on the client?
Possibly, but probably not to the point where you need it to be. I'd just try tweaking the tcpwindowsize; there's not as much to gain with the others.

If you're backing up to a container pool, I'd consider client-side dedup and compression; that would cut down the amount of data sent to the server.
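For what it's worth, both of those are plain client options in dsm.opt (client-side dedup also has to be permitted for the node on the server side, e.g. DEDUP=CLIENTORSERVER on the node definition):

```
* dsm.opt sketch: client-side dedup + compression
* (dedup only applies when the target is a deduplicating container pool)
DEDUPLICATION YES
COMPRESSION   YES
```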
 

Possibly, but probably not to the point where you need it to be. I'd just try tweaking the tcpwindowsize; there's not as much to gain with the others.

If you're backing up to a container pool, I'd consider client-side dedup and compression; that would cut down the amount of data sent to the server.


Backing up to a VTL, so no dedup at the moment. The cluster is Windows Server 2012, which has TCP window scaling set to auto by default. I've set the tcpwindowsize to 256 on the client, which, I believe, will override the OS setting for the backup.
 

Backing up to a VTL, so no dedup at the moment. The cluster is Windows Server 2012, which has TCP window scaling set to auto by default. I've set the tcpwindowsize to 256 on the client, which, I believe, will override the OS setting for the backup.
Regarding TCPWindowsize: a value of 0 uses the OS value; a nonzero value uses what you specify.
 

If it's a VTL backup, then try TXNBYTELIMIT 10G. This is noted in the 7.1 performance tuning guide PDF.
 

Are you trying to do client-side compression? For a few of our large flat files (2TB or so), we turned compression off. That way it's just sending the data down the pipe instead of chewing CPU cycles and time trying to compress the data before sending it. That took my backup window from 14 hours down to 9 hours. Not a lot, but it certainly helps.

How much bandwidth do you have coming into the TSM server? How much is available on the client? How much of either is being used during this client's backup window? No matter how you slice it, if you have a 2TB file and only a 1Gb Ethernet connection, then even assuming you hit the absolute maximum transfer speed of the link, you're looking at around 5 hours of transfer time. With 10Gb, if your disks (source and TSM) can handle the IOPS, you should be able to transfer 2TB in about 25 minutes, give or take. (All napkin math, by the way.)
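The napkin math above can be sketched quickly; assuming decimal TB/Gb and zero protocol overhead, which is why real-world numbers come out a bit higher:

```python
def transfer_hours(size_tb: float, link_gbit: float) -> float:
    """Ideal transfer time in hours for size_tb terabytes over a
    link_gbit Gbit/s link (no protocol overhead assumed)."""
    size_bytes = size_tb * 1e12              # decimal terabytes -> bytes
    rate_bytes = link_gbit * 1e9 / 8         # Gbit/s -> bytes/s
    return size_bytes / rate_bytes / 3600    # seconds -> hours

print(round(transfer_hours(2, 1), 1))       # ~4.4 hours on 1 Gbit/s
print(round(transfer_hours(2, 10) * 60))    # ~27 minutes on 10 Gbit/s
```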

Also, you may want to bump your maxnummp to 12 if resourceutilization is at 10. I've always found that setting maxnummp two values higher than resourceutilization helps prevent 'This node has exceeded its maximum number of mount points'. That said, I've noticed the biggest performance improvement from resourceutilization when you have many different filesystems. If it's just two or three, you might not go above 6 or so.
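Since maxnummp is set on the server per node rather than in dsm.opt, a sketch from an administrative (dsmadmc) session; the node name here is a placeholder:

```
UPDATE NODE cluster_node MAXNUMMP=12
QUERY NODE cluster_node F=D
```

The second command just confirms the new mount-point limit on the node's detailed listing.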
 