FILE dev class volumes vs DISK dev class volumes

Mikky83

Hi Guys,

This is the current configuration:
A standard, simple storage pool defined on block storage; the device class is FILE. This is the primary storage pool, and the next storage pool is on tape. Migration is something of a bottleneck in the current configuration, and I was wondering whether it would be a better option to use volumes from a DISK device class.
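
For reference, a minimal sketch of roughly what a setup like that looks like in admin commands; the names (filedev, filepool, tapepool), directories and sizes are made up for illustration, and only the relevant parameters are shown:

    define devclass filedev devtype=file mountlimit=100 maxcapacity=50G directory=/tsmfs01,/tsmfs02
    define stgpool filepool filedev maxscratch=200 nextstgpool=tapepool highmig=90 lowmig=70

query stgpool filepool f=d then shows Pct Util, Pct Migr and the migration thresholds, which is the quickest way to see how far behind migration is running.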

In terms of performance when migrating data to the next storage pool, which is better for volumes created on block storage disks:
volumes defined with a FILE device class or volumes defined with a DISK device class?

Thanks,
 
FILE is better because it uses a larger block size. It is also better to use predefined volumes instead of scratch volumes to avoid fragmentation, although that matters less if you empty the pool daily by migrating it to tape. Also make sure you have 10 to 30 filesystems, each backed by its own LUN on your storage device and all the same size, so the load is spread evenly.
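
A minimal sketch of predefining FILE volumes instead of relying on scratch; the pool name, paths and sizes are examples only, and FORMATSIZE is in MB:

    define volume filepool /tsmfs01/vol0001.dsm formatsize=51200
    define volume filepool /tsmfs02/vol0002.dsm formatsize=51200
    update stgpool filepool maxscratch=0

Repeat the define volume step across all of the filesystems; setting maxscratch=0 afterwards keeps the pool on the predefined volumes.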

More info here: https://www.ibm.com/docs/en/spectru...performance-checklist-storage-pools-disk-file
 
What leads you to say disk is the bottleneck for efficient migration?
For several years I struggled with disk-to-tape performance, and it turned out to be the SAN fabric.
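
If it helps to narrow that down, two read-only checks from an administrative (dsmadmc) session while migration is running: query process reports bytes moved per migration process, so you can estimate the MB/s each process is actually achieving, and query stgpool f=d shows Pct Migr, the thresholds, and how many migration processes the pool is allowed to run:

    query process
    query stgpool f=d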

If you are willing, please share your system specs, including OS type, your storage specs and layout, and the infrastructure from server to storage array and from server to tape drives.

In my experience, the first big gain was separating the disk HBAs and tape HBAs on the server: fcs1-fcs4 to drive disk, then fcs5 and fcs6 to drive the tape devices. This allows you to tune the adapter settings to match each device type. Next, if everything is connected to the same switch, make sure you are not running out of buffer credits.
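
Assuming AIX (the fcsX naming suggests it), a rough sketch of checking and tuning one of the tape adapters; fcs5/fscsi5 and the values are only examples to validate against your own environment and adapter documentation:

    lsattr -El fcs5                  # show current adapter attributes
    chdev -l fcs5 -a max_xfer_size=0x200000 -a num_cmd_elems=1024 -P   # staged; takes effect after reboot/reconfigure
    chdev -l fscsi5 -a dyntrk=yes -a fc_err_recov=fast_fail -P

On a Brocade switch, portbuffershow is the usual way to see whether a port is running short of buffer credits.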

Once I had isolated storage and tape, I could really push my storage but not my tape devices. That led me down the path of checking all the ISLs between the TSM SAN switch and the tape device switch, which uncovered some configuration problems that no one had noticed, because no one had been pushing 600 MB/s or more down the wire.
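
For the ISL checks, assuming Brocade switches, these read-only FOS commands are the sort of thing that exposed it for me (port 15 is just an example index):

    islshow            # ISLs with negotiated speed and trunking state
    porterrshow        # per-port error counters; watch crc, enc_out and disc c3 on the ISL ports
    portstatsshow 15   # detailed counters for a single port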

In the end, what used to take 8+ hours for backup stgpool/migrate now finishes in under 4 hours most days.
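
For what it's worth, the nightly drain itself is just the standard commands run with several parallel processes; the pool names and process counts below are made up:

    update stgpool filepool migprocess=4
    backup stgpool filepool copypool maxprocess=4 wait=yes
    migrate stgpool filepool lowmig=0 wait=yes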

Not saying I will spot a smoking gun, but it might suggest a few places to look at a bit more closely.
 