DSMSCOUTD problems

mbtoys
Joined Dec 4, 2007
Good day,

The following issues arise when configuring HSM 5.5.1 on GPFS 3.2 on Linux RH5.

I cannot get it to automigrate its files.
I think the reason for this is that the dsmscoutd never gets the initial metadata file ready. The strange thing is that I am using HSM on a 12 TB disk, and the metadata files already use 500 GB of space by now.

The documentation gives a formula to calculate the space needed for metadata at the initial dsmscoutd scan:

#inodes / 8 * 1024

When I do a df -i I get 13351445 inodes:
13351445 / 8 * 1024 = 1,708,984,960 bytes
Meaning that I should need about 1.7 GB reserved for metadata, instead of the roughly 500 GB of metadata I am actually seeing.

Has anyone seen this behaviour, and if so, how did you fix it?
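As a quick sanity check, the documented sizing formula can be evaluated with a one-liner (the inode count is the df -i figure quoted above; the formula itself is the one from the HSM documentation, so verify it against your own level):

```shell
# Expected metadata size per the documented formula: #inodes / 8 * 1024 bytes
inodes=13351445   # value reported by `df -i` for the managed file system
awk -v n="$inodes" 'BEGIN { printf "%.0f bytes\n", n / 8 * 1024 }'
# prints: 1708984960 bytes (about 1.7 GB)
```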
 
Re: dsmscoutd problem

This is a known problem which is currently under investigation by HSM development (no fix yet).

It seems to occur with large file systems (> 255 GB depending on the block size defined).

I will post an update in this forum when I have more information about the code level and schedule for the fix.
 
Re: dsmscoutd problem

This issue has been addressed with APAR IC58088 and is fixed with 5.5.1.10 and higher levels.
 
Deleted metadata files

To temporarily fix my automigration problem, I stopped the dsm processes and deleted the <managed file space>/.SpaceMan/metadata/meta1 and meta2 files. When I restarted the dsmscoutd process, it scanned the entire managed file space and rebuilt those metadata files. They were the same size as before, but both manual migration and automigration began working again. The rebuild took half an hour on a pretty busy system (the users noticed!).

This was a temporary fix for me; I intend to apply the updated HSM client 5.5.2.10 as soon as I can. IBM told me that the updated version is more efficient at creating those metadata files, so they don't get so big.
Steve
 
How to speed up automigration and reduce the CPU load

By default the CFI (metadata) file is sized so that all files (one entry per inode) can be automigrated. For example, a JFS2 file system with 2 TB capacity has a CFI file of about 80 GB. If you plan for, say, "only" 20 mio files, then the CFI can be reduced to about 3 GB.

Thus the search (in the CFI) for new migration candidates becomes more than 20 times faster:
Automigration will be quicker.
Fewer CPU cycles will be used by HSM (the dsmscout daemon).
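The numbers above can be reproduced with the #inodes / 8 * 1024 sizing formula quoted earlier in this thread (an assumption on my part; the exact CFI layout may differ between HSM levels):

```shell
# CFI size for a planned maximum of 20 mio automigratable files
maxfiles=20000000
cfi_bytes=$(( maxfiles / 8 * 1024 ))      # 20000000/8 = 2500000, exact
echo "reduced CFI: $cfi_bytes bytes"      # 2560000000 bytes, ~2.56 GB (~"about 3 GB")

# Rough speedup vs. the ~80 GB CFI of the full 2 TB JFS2 example,
# assuming candidate-search time scales with CFI size
awk -v full=80 -v small=2.56 'BEGIN { printf "speedup ~%.0fx\n", full / small }'
# prints: speedup ~31x (consistent with "more than 20 times faster")
```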

How to reduce the CFI file size?
The answer depends on the HSM version used.

HSM 5.5
With 5.5.2.4 we introduced a new option for the dsm.opt file named MAXFILESINFS. With this option you can specify the maximum number of files that can be automigrated per file system (see http://www.ibm.com/support/docview.wss?uid=swg1IC67163 ).
It applies to all file systems on your "box" in the same way, so if you have more than one HSM file system on a box, specify this option for the largest one.
For example, if you plan for a maximum of 20 mio automigratable files per file system, specify
MAXFILESINFS 20000000
in your dsm.opt file.
Afterwards:
1. stop the dsmscout daemon (scoutd): "dsmscoutd stop"
2. delete the metadata files: "rm -rf /file_system_spec/.SpaceMan/metadata"
3. start the scoutd again: "dsmscoutd"
The scoutd will then scan the file system and recreate the deleted metadata files.

HSM 6.1.3 and higher
You can specify the maximum number of files that can be automigrated for each file system individually. Use the "-maxfiles" option of the "dsmmigfs add" or "dsmmigfs update" command. For example, invoke "dsmmigfs update -maxfiles=20000000 /file_system_spec" to enable automigration of at most 20 mio files for /file_system_spec.
Afterwards you need to stop the scoutd, delete the metadata files (as described above), and restart the scoutd.
See http://www-01.ibm.com/support/docview.wss?uid=swg27013725 for the corresponding HSM 6.1 manual update.
 
I'm not familiar with 'mio'

What is the "mio" you are referring to?
Thank you!
Steve
 