Subject: Re: [ADSM-L] ?anyone using TSM to backup Panasas PanFS?
From: "Evans, Bill" <bevans AT FHCRC DOT ORG>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Wed, 27 Jan 2010 07:02:18 -0800
We use TSM to back up our primary research data server.  It's a Sun SPARC
server with 150 TB of data: 51M files, 2M directories, and a change rate
of 1+ TB per day.  The incremental backup takes roughly 6-8 hours to run,
and as you might expect, most of that time is spent scanning the
filesystems to find changes.  The filesystem is Veritas VxFS.

Since the scan/backup time fits comfortably within our backup window, it
is not a big deal.  As we grow, we will probably add more drives and more
server capacity (faster/more processors, more RAM, etc.), and TSM keeps up
with this very well.  I would really *hate* to ever have to run a full
backup on this beast.
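
For anyone facing a similar scan-bound incremental on one huge filesystem,
a common tactic is to split it into multiple TSM filespaces so several
incrementals can scan in parallel.  Below is a minimal sketch of the
relevant Unix client options; the server name, paths, and values are
hypothetical examples, not our actual configuration:

  * dsm.sys fragment (Unix client system options file) -- values are examples
  SERVERNAME            TSMSERVER1
  * allow the client to open multiple parallel sessions
  RESOURCEUTILIZATION   10
  * process one directory at a time to keep client memory use bounded
  MEMORYEFFICIENTBACKUP YES
  * present large subtrees as separate filespaces (paths are made up)
  VIRTUALMOUNTPOINT     /data/projects
  VIRTUALMOUNTPOINT     /data/archive

  # then run one incremental per virtual mount point, e.g. in parallel:
  dsmc incremental /data/projects &
  dsmc incremental /data/archive &
  wait

Whether this helps depends on how evenly the subtrees split and on how many
mount points the TSM server and the disk subsystem can scan concurrently.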


Thanks,

Bill Evans
Research Computing Support
FRED HUTCHINSON CANCER RESEARCH CENTER

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of
James R Owen
Sent: Tuesday, January 26, 2010 3:10 PM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: [ADSM-L] ?anyone using TSM to backup Panasas PanFS?

Yale uses Panasas PanFS, a massively parallel storage system, to store
research data generated by our HPC clusters.  In considering the
feasibility of backing up PanFS with TSM, we are concerned about whether
TSM is appropriate for backing up and restoring:

  1. very large volumes,
  2. deep subdirectory hierarchies with hundreds to thousands of sublevels,
  3. large numbers of files within individual subdirectories,
  4. much larger numbers of files within each directory hierarchy.

Are there practical upper limits for any of the above, beyond which TSM
can no longer perform backups and restores effectively?

Please advise on the feasibility, and on any configuration recommendations
for maximizing PanFS backup and restore efficiency with TSM.

Thanks for your help.
--
Jim.Owen AT Yale DOT Edu   (w#203.432.6693, c#203.494.9201, h#203.387.3030)
