Hi everybody, does anyone know the best way to back up a filesystem that contains millions of files? A backup image is not possible because it is a GPFS filesystem, which is not supported. Thanks in advance
Author: "Schneider, Jim" <jschneider AT USSCO DOT COM>
Date: Thu, 2 Feb 2012 08:47:24 -0600
Jorge, On Unix systems: I've done it in two steps. Create a tar file of the file system and zip it. Create a second file listing all the files in the tarred directory. The tar extract command allows
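Jim's two-step approach could be sketched roughly like this (the function name and paths are illustrative, not from the original post):

```shell
#!/bin/sh
# backup_fs: tar+gzip a directory tree, then keep a companion
# listing of everything that went into the archive.
# Usage: backup_fs /gpfs/fs1 /backup/fs1.tar.gz   (paths illustrative)
backup_fs() {
    src=$1
    out=$2
    tar -czf "$out" -C "$src" .        # step 1: tar the filesystem and compress it
    tar -tzf "$out" > "$out.filelist"  # step 2: record the archive contents in a second file
}
```

The separate file listing is what later lets you find and extract individual files without scanning the whole archive.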
Is it organized into subdirectory trees? If so, virtual mountpoints might be a way to go. Gary Lee Senior System Programmer Ball State University phone: 765-285-1310
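For readers unfamiliar with the option: virtual mountpoints are defined in the client options file (dsm.sys on Unix), one per subdirectory tree. A sketch, with illustrative paths:

```
* dsm.sys (server stanza) -- present each subtree as its own
* filespace so scans and backups can run in parallel
VIRTUALMOUNTPOINT /gpfs/fs1/projects
VIRTUALMOUNTPOINT /gpfs/fs1/home
```

Each virtual mountpoint then appears to the TSM server as a separate filespace, so multiple dsmc processes can work through the tree concurrently.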
Hi Jim, thank you very much for your answer. We are actually doing what you describe: tarring the filesystem. It was a great solution when the filesystem was 500 GB-1 TB, but our filesystem is now 14 TB. The ta
Be sure to check the mail archives (www.adsm.org or www.mail-archive.com/adsm-l AT vm.marist DOT edu/) for common issues that have been discussed in the past. A lot of people have contributed a lot of i
Author: Rainer Wolf <rainer.wolf AT UNI-ULM DOT DE>
Date: Thu, 2 Feb 2012 16:31:04 +0100
Hi, it depends on how many changes happen daily, and also on how fast the TSM client can scan the filesystem. We have some clients with > 30 million files, and they can scan with TSM more than
Author: Skylar Thompson <skylar2 AT U.WASHINGTON DOT EDU>
Date: Thu, 2 Feb 2012 08:11:54 -0800
One million files shouldn't be a problem with some planning; we have a GPFS filesystem with 31 million files that we can backup in 18 hours. We have some other non-GPFS filesystems with several times
GPFS, so use mmbackup with TSM; there is some white/blue/Redpaper or a wiki somewhere about that. -- Met vriendelijke groeten/Kind Regards, Remco Post r.post AT plcs DOT nl +31 6 248 21 622
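An mmbackup invocation along the lines Remco suggests might look like this (the filesystem path is illustrative; check the GPFS documentation for your release, since available options vary):

```
# Incremental backup of a GPFS filesystem via TSM,
# using the GPFS policy engine to find changed files
mmbackup /gpfs/fs1 -t incremental
```

The advantage over a plain dsmc incremental is that mmbackup uses the GPFS metadata scan to build the candidate list, which is typically much faster than walking the directory tree file by file.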
Author: "Robert A. Clark" <robert.a.clark AT DAIMLER DOT COM>
Date: Thu, 2 Feb 2012 08:58:10 -0800
I find that "incrbydate" is often a win for making the first pass across a filesystem with a large count of files to be backed up. Doing a "dirsonly" pass is sometimes a win as well. Thanks, [RC]
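The two passes Robert describes would be invoked roughly like this (the filesystem path is illustrative):

```
# Pass 1: back up the directory structure only
dsmc incremental -dirsonly /gpfs/fs1

# Pass 2: back up files changed since the last backup date,
# skipping the full attribute comparison against the server
dsmc incremental -incrbydate /gpfs/fs1
```

Note that incrbydate relies on file modification dates, so it can miss changes such as deletions and permission-only updates; a full incremental is still needed periodically to catch those.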
Author: Howard Coles <Howard.Coles AT ARDENTHEALTH DOT COM>
Date: Thu, 2 Feb 2012 17:11:07 +0000
Have you tried using the memoryefficient=diskcache method? I'm assuming you have. It takes a while, but on fast systems I've had pretty decent results. You may need to go the virtual mount point route t
I'm able to back up several systems with hundreds of millions of files using the diskcache method. If your backup is running that slow on a GPFS filesystem, you probably have a performance problem with th
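The diskcache method mentioned above is enabled in the client options file; a sketch (the cache path is illustrative and should point at fast local disk with enough free space):

```
* dsm.sys: build the incremental-backup inventory on disk
* instead of holding it all in client memory -- slower per file,
* but it scales to filesystems with huge object counts
MEMORYEFFICIENTBACKUP DISKCACHEMETHOD
DISKCACHELOCATION /var/tsm/diskcache
```

This trades scan speed for memory: the client no longer needs RAM proportional to the number of files in the filespace.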
Author: "Gretchen L. Thiele" <gretchen AT PRINCETON DOT EDU>
Date: Thu, 2 Feb 2012 15:23:01 +0000
If you are not using HSM, the virtual mountpoint approach is a good one. Since we do have an integrated TSM/HSM system on GPFS, we can't do that. It takes a bit over 3 days to wade through 130+ milli