Hi,
I do not think this is a task for any "normal" backup solution (not to mention
that PanFS is possibly not supported by any of them). With these specifications
you may easily exceed any filename/path length limit the backup software has.
Scanning the filesystem can take hours (and that is the optimistic case!) even
before a single byte is transferred - a huge number of files is a killer for
any backup solution. What are your RPO and RTO? What is the purpose of the
backup, and what granularity is required?
Without knowing anything more about your environment, it seems to me that
replication (possibly synchronous) between two sites plus volume block-level
backup (how large is VERY LARGE?) is what you will end up with .... crystal
balls are scarce these days :)
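To make the scan-time point concrete, here is a minimal back-of-envelope sketch. The file count and metadata-operation rate below are hypothetical placeholders, not figures from this thread - plug in your own numbers:

```python
# Rough estimate of incremental-backup scan time: the client must stat and
# compare every file against the server inventory before any data moves.
# Assumed numbers are illustrative only; measure your own filesystem.

def scan_hours(num_files, stats_per_second):
    """Hours spent just walking metadata, before a single byte is backed up."""
    return num_files / stats_per_second / 3600

# e.g. 500 million files at 20,000 metadata ops/sec:
print(round(scan_hours(500_000_000, 20_000), 1))  # ~6.9 hours, scan alone
```

Even generous metadata rates leave the traversal itself in the hours range at this scale, which is why file-count, not capacity, tends to be the limiting factor.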
Harry
________________________________
From: James R Owen <Jim.Owen AT YALE DOT EDU>
To: ADSM-L AT VM.MARIST DOT EDU
Sent: Wed, January 27, 2010 12:09:53 AM
Subject: [ADSM-L] ?anyone using TSM to backup Panasas PanFS?
Yale uses Panasas PanFS, a massively parallel storage system, to store research
data generated by HPC clusters. In considering the feasibility of backing up
PanFS with TSM,
we are concerned about whether TSM is appropriate for backing up and restoring:
1. very large volumes,
2. deep subdirectory hierarchies with hundreds to thousands of sublevels,
3. large numbers of files within individual subdirectories,
4. much larger numbers of files within each directory hierarchy.
Are there effective maximum limits for any of the above, beyond which
TSM can no longer perform backups and restores effectively?
Please advise about the feasibility and any configuration recommendation(s)
to maximize PanFS backup and restore efficiency using TSM.
Thanks for your help.
--
Jim.Owen AT Yale DOT Edu (w#203.432.6693, c#203.494.9201, h#203.387.3030)