lipi (ADSM.ORG Member, joined Jan 14, 2015)
Hi,
I have 23 LTO-4 drives attached to my TSM server and a GPFS filesystem with 500 TiB of used storage. I also have 2 servers dedicated to backing up data. All 3 servers are connected through 10 Gbps links (one each).
On the TSM server I have local RAID 6 disk storage that delivers 6 Gbps or less (the main bottleneck). On the drive side I get 1.4 Gbps per drive (there the bottleneck is the drive, not the network).
I am using mmbackup for the first time, and I want to back up the full filesystem. I tried different parameters, but during execution I get the following errors:
a) 03/21/16 11:29:42 ANS1311E Server out of data storage space
b) 03/22/16 08:05:23 ANS0326E This node has exceeded its maximum number of mount points.
My main bottleneck is the disk. It is a storage pool of 12 TiB and, when it's full, I receive the ANS1311E. I always have a migration process running. I see that when this happens, maxnummp (16 for my client) is reached and data begins to go directly to the drives (this is a good thing).
For a), I suspect that at a certain point, when all sessions are attached directly to a drive, if one session finishes and the migration process has managed to free some space (e.g. the disk stgpool is at 99% instead of 100%), a new session starts and begins writing to the disk stgpool, then receives an ANS1311E after a few minutes.
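A server-side sketch of what I'm considering for a): keep the disk pool from filling by migrating earlier and by sending large files straight to tape. The pool name DISKPOOL is a placeholder for my setup, and the threshold values are guesses; HIGHMIG, LOWMIG, and MAXSIZE are standard UPDATE STGPOOL parameters:

```
/* run from dsmadmc */
/* start migration well before the pool is full */
update stgpool DISKPOOL highmig=50 lowmig=20
/* files larger than 2 GB bypass disk and go to the next pool (tape) */
update stgpool DISKPOOL maxsize=2G
```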
For b), I don't know why I get it.
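My understanding for b) (please correct me) is that MAXNUMMP is enforced per node across all of that node's sessions, and mmbackup starts several dsmc instances whose tape sessions add up; once the disk pool is full, more than 16 sessions try to mount tapes and the extras get ANS0326E. A sketch of checking and raising the limit, with GPFSNODE as a placeholder node name:

```
/* run from dsmadmc */
/* check the current "Maximum Mount Points Allowed" */
query node GPFSNODE f=d
/* allow more concurrent tape mounts, up to the 23 drives */
update node GPFSNODE maxnummp=20
```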
Tried:
Total dsmc threads = 4, RESOURCEUTILIZATION=10, nummp=16
Total dsmc threads = 8, RESOURCEUTILIZATION=6, nummp=16, 27 sessions seen (current)
Total dsmc threads = 12, RESOURCEUTILIZATION=10, nummp=16
Total dsmc threads = 24, RESOURCEUTILIZATION=10, nummp=12, 125 sessions seen (was first try)
maxsessions=255
maxnummp for client=16
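For reference, the client side of these settings lives in dsm.sys on the GPFS nodes. A sketch of the stanza as I understand it, with TSMSRV as a placeholder server name and values taken from my current try (TXNBYTELIMIT is an assumption on my part, added to keep tape sessions streaming):

```
SErvername TSMSRV
* up to a handful of producer/consumer sessions per dsmc instance
   RESOURCEUTILIZATION 6
* larger transactions help keep LTO-4 drives streaming
   TXNBYTELIMIT 10G
```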
In the current try, I assumed 8 threads × 2 consumer sessions each = a maximum of 16 mount points used, but I still see message b).
If I reduce the number of dsmc threads, I lose performance. What I want is for everything to go directly to 16 drives and skip the disk if possible.
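To skip the disk pool entirely, a sketch of pointing the backup copy group straight at the tape pool; the domain/policy set/management class names (STANDARD) and pool name LTOPOOL are placeholders for my setup:

```
/* run from dsmadmc */
update copygroup STANDARD STANDARD STANDARD standard type=backup destination=LTOPOOL
validate policyset STANDARD STANDARD
activate policyset STANDARD STANDARD
```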
Or what is your best approach in this case to maximize performance while avoiding errors? Now I get between 6 and 8 Gbps (pretty good).