a) Large servers with a small average file size should always go to a random-access
disk pool. Sending them to tape (or a sequential file pool) will almost certainly
reduce performance. So no, keep sending those small files to a random disk pool. If
it's not big enough, increase the size; don't try sending the data somewhere else.
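If the pool does need to grow, space is added by defining more volumes to it. A minimal sketch, assuming a random-access pool named DISKPOOL and free space under /tsm/disk (both placeholders):

```
/* Pre-format and add a 50 GB volume to the random-access pool */
define volume DISKPOOL /tsm/disk/vol05.dsm formatsize=51200
/* Confirm the new capacity */
query stgpool DISKPOOL
```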
b) For ALL storage pools there will be one migration process per node. It doesn't
matter if the pool is random, sequential, or a CD; it's always going to be one
process per node.
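The one related knob is the pool's MIGPROCESS setting, which caps how many of those per-node processes may run in parallel; it only helps when more than one node has data in the pool. A sketch, with DISKPOOL as a placeholder pool name:

```
/* Allow up to 4 concurrent migration processes (still at most one per node) */
update stgpool DISKPOOL migprocess=4
query stgpool DISKPOOL format=detailed
```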
c) The migration processes are usually not the issue; the issue is somewhere else.
So don't fixate on the one migration process per node. If you're having
performance issues, the migration process is most likely not the cause.
d) If you're seeing I/O wait, the disks behind your disk pools are most likely not
properly configured. Remember that the basic idea of performance for a random
disk pool is to have enough spindles (as in, enough hard drives in whatever
disk system you're using). The only way to increase performance long-term is to
increase the number of spindles. The disk system's memory cache is always
helpful, but when it gets filled (and it will) it has to destage to the physical
spindles. If your disks can't handle that load, you have a bottleneck.
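The spindle math can be sketched as a quick sanity check; the throughput figures below are assumptions for illustration, not measurements from any particular array:

```shell
# Back-of-the-envelope: spindles needed to sustain a target ingest rate,
# assuming ~50 MB/s sustained throughput per 7.2k rpm spindle.
target_mb_s=400        # e.g. roughly what it takes to land 4 TB in ~3 hours
per_spindle_mb_s=50    # assumed sustained rate per drive
# Round up, since cache will eventually have to destage to these disks.
spindles=$(( (target_mb_s + per_spindle_mb_s - 1) / per_spindle_mb_s ))
echo "at least $spindles spindles needed"
```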
e) When you increase your disk pool to 10 TB, make sure the increase is done
over new spindles (hard drives) and not the ones you're already using.
Expanding the array/LUN across the already-used spindles won't increase your
performance.
Exist i Stockholm AB
Switchboard: 08-754 98 00
Fax: 08-754 97 30
daniel.sparrman AT exist DOT se
Posthusgatan 1 761 30 NORRTÄLJE
-----"ADSM: Dist Stor Manager" <ADSM-L AT VM.MARIST DOT EDU> skrev: -----
Till: ADSM-L AT VM.MARIST DOT EDU
Från: amit jain
Sänt av: "ADSM: Dist Stor Manager"
Datum: 03/15/2012 17:46
Ärende: Re: migration threads for random access pool and backups issue
Thanks to all for these valuable inputs. Appreciate it a lot.
Well, this is the first time this data is getting backed up.
For now, here is what I was not aware of and what I have done: 1. On random
access pools, can multiple migration sessions be generated only if we back up
from multiple nodes? Is my understanding correct, or is there any way to
increase the number of tape mounts?
Now I know: migration processes depend on nodes. Random access storage has a
limitation with regard to migration processes. If I had more than two nodes
backing up to the same random access storage pool, then I could have more than
two migration processes, depending on the configuration settings. If the disk
pool gets filled up, data goes to the next pool and backups won't stop.
In our environment we have a large number of small files, so a FILE-type disk
pool is not a good idea. Improving back-end speed does not always help, because
the speed of data coming into the TSM server will not be faster than (sometimes
only equal to, or slightly better than) the speed of dumping data from disk to
tape. This all depends on the type of data being backed up. If there is a huge
number of files, one migration process is good enough, with 2 or 3 additional
tape drives allocated for direct backup when the disk pool overflows. That was
much, much faster than using a FILE-type device class with multiple migration
processes.
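For reference, the FILE-type alternative being compared would look roughly like this; the device class name, capacity, directory, and scratch limit are illustrative only:

```
/* Sequential FILE device class: mountlimit governs concurrent "mounts" */
define devclass FILEDEV devtype=file mountlimit=5 maxcapacity=50G directory=/tsm/filepool
define stgpool FILEPOOL FILEDEV maxscratch=200 migprocess=4
```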
Currently we are able to back up ~4 TB a day. I will be increasing the storage
pool size to 10 TB and hope I will get better performance. There is also a
bottleneck on the TSM client side; we are seeing I/O wait there.
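One way to keep a small disk pool from overflowing while backups run is to tune its migration thresholds so migration starts earlier and drains further. A sketch with placeholder values:

```
/* Start migrating at 70% full, stop once the pool is down to 30% */
update stgpool DISKPOOL highmig=70 lowmig=30
```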
On Sat, Mar 10, 2012 at 9:08 PM, amit jain <amit12.jain AT gmail DOT com> wrote:
> I have to back up a large amount of data, ~300 TB, and have a small disk
> pool (500 GB). I have 3 filespaces, backing up on a single node. I am
> triggering multiple dsmc sessions, dividing the filespaces by directories. I
> have 15 E06 tape drives and can allocate 5 of them for this backup.
> If I run multiple dsmc sessions, the server starts only one migration
> process and one tape mount.
> As per ADMIN GUIDE the Migration for Random Access is "Performed by node.
> Migration from random-access pools can use multiple processes."
> My Question:
> 1. Can multiple migration sessions be generated on random access pools
> only if we back up from multiple nodes? Is my understanding correct, or is
> there any way to increase the number of tape mounts?
> 2. Is the only way to speed up with current resources to back up to a FILE
> device class, so that I can have multiple tape mounts?
> 3. Any inputs to speed up this backup?
> Server and client are both on Linux, running TSM version 6.2.2.
> Any suggestions are welcome.