On 05/08/16 04:52, martin f krafft wrote:
> also sprach Adam Goryachev <mailinglists AT websitemanagers.com DOT au>
> [2016-08-04 16:04 +0200]:
>> I've used ls -l /proc/pid/fd or strace or lsof etc... all work,
>> some are better on the client than on the backuppc server.
> In fact, I found none of those useful on the server.
>
>> I've also used tail -f XferLOG | BackupPC_zcat which does work,
>> but doesn't update in real time (ie, you have to wait for a number
>> of lines of log output before you see the update).
> I've tried this, but I get:
>
> /usr/share/backuppc/bin/BackupPC_zcat: can't uncompress stdin
>
> This is using BackupPC 3.3.0 (Debian stable)
Sorry, I've not used BPC 3.x in years... Maybe try this:

tail -f -n +0 blah.log | /usr/share/backuppc/bin/BackupPC_zcat -

You need to include the beginning of the file, or BackupPC_zcat won't
detect the compression header... Also, a trailing - conventionally means
"read from stdin" where a filename argument is required; it may or may
not be needed here.
>
>> Not sure of a "better" way.... Backuppc 4.0 includes a counter for
>> number of files xfered though that doesn't help for BPC 3.x
> The counter isn't really that useful, I think, especially not if it
> doesn't have a "X of Y files" total that doesn't change (cf. rsync,
> which is kinda useless, as the total keeps increasing).
It includes the total number of files from the previous backup... so
generally it is pretty useful (unless the client has added a huge number
of files between backups, or you are stuck backing up a single huge
file, in which case it looks like there is no progress). Perhaps a
better indicator would be based on MB processed compared to the size of
the previous backup. I'm sure patches are welcome :)
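For what it's worth, a rough sketch of that idea in shell. The two input
values are hypothetical placeholders: in a real patch they would come
from BackupPC's own status/accounting, which I'm not reproducing here.

```shell
#!/bin/sh
# Sketch only: report progress as bytes processed so far versus the
# previous backup's total size. Both values below are example numbers;
# a real implementation would read them from BackupPC's status output.
prev_total=5000000000   # total bytes in the previous backup (example)
bytes_done=1250000000   # bytes processed so far (example)

pct=$(( bytes_done * 100 / prev_total ))
echo "progress: ${pct}% (${bytes_done} of ${prev_total} bytes)"
```

Unlike a files-transferred counter, this wouldn't stall visually while a
single huge file is being processed.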
> The more I think about it, the more I want XferLOG
> uncompressed/unbuffered, but also structured in a way so that it
> starts a new line when it inspects a file, and then finishes the
> line with details and the verdict (same, create, link, …)
Feel free to write a patch to do what you want, but I expect patches to
BPC v3.x are unlikely to be accepted at this stage, unless they fix
actual problems (ie, things preventing backups from working).
Remember, in the majority of cases you won't be watching backups; they
are something that *just happens*, and later you come along and verify
they did happen, or restore some files. So "watching" a backup in
progress isn't a high priority...
>
> also sprach Tony Schreiner <anthony.schreiner AT bc DOT edu> [2016-08-04
> 15:52 +0200]:
>> Also on the backup host, you can get the process id of the current dump
>> processes (there will be two per host during file transfer), and do
>>
>> (sudo) ls -l /proc/{pid1,pid2}/fd
>>
>> if a file is being written to backup it will show up in this list. But be
>> aware that there are times (sometimes long) when files are not being written
> What happens during those times?
Backing up a single large (modified) file requires the server to
de-compress the original file and then apply the changes from the
remote side. I'm not sure why, but BPC v3 seems to be rather
inefficient at this process. This is one of the reasons I tend to split
large files on the remote side before BPC runs (eg, VM images, sql
dumps, etc); the other reason is that most chunks will be unchanged,
which saves disk space on the BPC side, improves rsync bandwidth
consumption, etc.
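As a sketch of that splitting step on the client (the path and chunk
size here are just example values, not anything BackupPC requires):

```shell
#!/bin/sh
# Hypothetical pre-backup step on the client: split a large dump into
# fixed-size chunks so rsync/BackupPC can skip the unchanged ones.
# /srv/dumps/db.sql and the 100 MB chunk size are example values.
mkdir -p /srv/dumps/chunks
split -b 100m -d /srv/dumps/db.sql /srv/dumps/chunks/db.sql.part.
```

Restoring is just `cat db.sql.part.* > db.sql`. Note this helps most
for files that grow or change in place; an insertion near the start of
the file shifts every later chunk and defeats the dedup.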
Regards,
Adam
--
Adam Goryachev
Website Managers
www.websitemanagers.com.au
------------------------------------------------------------------------------
_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/