On Thu, May 01, 2003 at 04:55:13PM -0600, George Kelbley wrote:
> I just ran into a problem I've never seen before trying to restore files
> from a large file system. I'm not sure but it appears that either tar
> or gzip is unhappy about the size of the file created by amanda during
> amdump, but amdump is running w/o errors. The file system is 51GB
> total, with lots of user directories inside. When I try to extract
> files via amrecover it finds them fine in the indexes and points me to
> the correct tape, but once it starts to extract the files I invariably
> get:
>
> tar: Skipping to next header,
>
> which isn't very helpful. If I use amrestore, it fails with :
>
> gzip: stdout: File too large
> Error 32 (Broken pipe) offset 1693417472+32768, wrote 0
> amrestore: pipe reader has quit in middle of file.
> amrestore: skipping ahead to start of next file, please wait...
>
> This is running on a linux amanda server, 2.4.2p2-4, the os is debian
> linux 3.0, kernel is 2.4.18.
>
You are getting close to 2GB (the failing offset in the error,
1693417472, is already about 1.7GB into the image). Some file systems,
and some binaries built without large-file support, have a limit on the
largest file that can be created, often 2GB. Is your file system, or
your gzip/tar build, so constrained?
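A quick way to check is to try writing a file just past the 2GB
boundary on the file system you are restoring to. This is only a
sketch; TESTDIR is a placeholder you would point at your restore
target or holding disk:

```shell
# Sketch: probe large-file support on a given file system.
# TESTDIR is an assumption -- set it to the restore destination.
TESTDIR=${TESTDIR:-/tmp}

# Seek one byte past the 2GB (2^31) boundary and write a single byte.
# On a file system or C library without large-file support this fails
# with "File too large" (EFBIG), the same error gzip reported.
dd if=/dev/zero of="$TESTDIR/lfs-test" bs=1 count=1 seek=2147483648 \
    && ls -l "$TESTDIR/lfs-test"

rm -f "$TESTDIR/lfs-test"
```

If the dd fails, the 2GB ceiling is on the restore side, not in the
dump on tape.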
> Any ideas greatly appreciated.
>
> --
> George Kelbley System Support Group
> Computer Science Department University of New Mexico
> 505-277-6502 Fax: 505-277-6927
>
>>> End of included message <<<
--
Jon H. LaBadie jon AT jgcomp DOT com
JG Computing
4455 Province Line Road (609) 252-0159
Princeton, NJ 08540-4322 (609) 683-7220 (fax)