Subject: Re: [Networker] any tips for backing up enormous sparse files?
From: Robert Maiello <robert.maiello AT PFIZER DOT COM>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Mon, 19 Dec 2005 13:02:07 -0500

These are "holey" files, according to uasm?

" holey
          The holey ASM handles holes or  blocks  of  zeros  when
          backing  up  files  and  preserves  these  holes during
          recovery.  On some filesystems interfaces can  be  used
          to  find  out  the  location  of file hole information.
          Otherwise, blocks of zeros that are read from the  file
          are skipped. This ASM is normally applied automatically
          and does not need not be specified."

It sounds to me like NetWorker applies this directive automatically, but it
still has to read all the blocks of zeros; it just doesn't transfer them.
Not sure what the point is then.  Perhaps try specifying the directive
explicitly anyway.
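
If that reading is right, the behavior would be roughly the loop sketched
below (a rough Python illustration of the idea, not NetWorker's actual
code): every chunk still gets read, and only the all-zero chunks are dropped
before transfer, so the scan time stays proportional to the file's apparent
size.

    import os

    CHUNK = 64 * 1024   # the 64K read size Tim mentions below

    def backup_skipping_zero_chunks(path, send):
        # Illustration of "blocks of zeros that are read from the file
        # are skipped": every chunk of the sparse file is still read,
        # but only non-zero chunks are handed to send() for transfer.
        zeros = bytes(CHUNK)
        offset = 0
        with open(path, "rb") as f:
            while True:
                chunk = f.read(CHUNK)
                if not chunk:
                    break
                if chunk != zeros[:len(chunk)]:
                    send(offset, chunk)    # real data goes to the server
                offset += len(chunk)       # holes cost read time, not transfer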

Perhaps EMC/Legato can shed more light on it.  It would be nice if save could
read something in the file and "realize" it only needs to read 92 KB.
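
For what it's worth, the space a sparse file actually occupies shows up in
stat() without reading the contents (st_blocks is in 512-byte units on
Linux), and on platforms that expose hole locations an lseek() interface can
walk just the data regions.  A rough Python sketch of both ideas, purely
illustrative; SEEK_DATA/SEEK_HOLE support varies by OS and kernel and may
well not exist on the systems in this thread:

    import os

    def allocated_bytes(path):
        # Apparent size vs. space actually allocated: for Tim's
        # /var/log/lastlog, st_size is ~1.2 TB but st_blocks*512 is ~92 KB.
        st = os.stat(path)
        return st.st_size, st.st_blocks * 512

    def data_extents(path):
        # Walk only the data regions using SEEK_DATA/SEEK_HOLE, where the
        # platform supports it (it isn't universal).
        if not hasattr(os, "SEEK_DATA"):
            return None
        extents = []
        with open(path, "rb") as f:
            fd = f.fileno()
            size = os.fstat(fd).st_size
            offset = 0
            while offset < size:
                try:
                    data = os.lseek(fd, offset, os.SEEK_DATA)
                except OSError:
                    break                  # no more data past offset
                hole = os.lseek(fd, data, os.SEEK_HOLE)
                extents.append((data, hole - data))
                offset = hole
        return extents

If save keyed off something like that, it could jump straight to the 92 KB of
real data instead of reading nulls for over an hour.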

Robert Maiello
Pioneer Data Systems





On Thu, 15 Dec 2005 17:30:54 -0600, Tim Mooney 
<mooney AT DOGBERT.CC.NDSU.NODAK DOT EDU> wrote:

>All-
>
>Server: Red Hat Linux AS 2.1, NetWorker 7.1.3 build 421
>Clients: various, for this example Red Hat Linux AS 4, NetWorker 7.1.3
>build 421
>
>
>On LP64 Linux clients, such as Red Hat Linux 4 on an EM64T system, certain
>sparse files are enormous.  The /var/log/lastlog file on many of our
>systems is 1.2 TB, though because it's sparse it's actually only using 92
>KB of disk space.
>
>Even though "save" and other NetWorker commands handle sparse files
>correctly, there's still a bit of an issue when dealing with a file like
>this.  When save backs it up, it opens it and happily starts reading the
>file in 64K chunks.  Even assuming save can read 256 MB a second this
>way (which it probably can't, because of the read size), if you do the
>math you'll see that it takes
>
>     1.2 TB == 1258291 MB
>
>     1258291 MB
>     ----------  = 4915 seconds
>      256 MB/s
>
>
>Or, about an hour and 20 minutes of reading 64K chunks of nulls, all to
>back up a file that has 92 KB of real data in it.
>
>How are others dealing with this?  Are you skipping files like this using
>"skip" or "null" directives?  Is there a way to increase the buffer size
>that "save" uses for reads?
>
>Tim
>--
>Tim Mooney                              mooney AT dogbert.cc.ndsu.NoDak DOT edu
>Information Technology Services         (701) 231-1076 (Voice)
>Room 242-J6, IACC Building              (701) 231-8541 (Fax)
>North Dakota State University, Fargo, ND 58105-5164
>

To sign off this list, send email to listserv AT listserv.temple DOT edu and
type "signoff networker" in the body of the email. Please write to
networker-request AT listserv.temple DOT edu if you have any problems with
this list. You can access the archives at
http://listserv.temple.edu/archives/networker.html or via RSS at
http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER