Subject: Re: [BackupPC-users] 100,000+ errors in last nights backup
From: Adam Goryachev <mailinglists AT websitemanagers.com DOT au>
To: Holger Parplies <wbppc AT parplies DOT de>
Date: Wed, 19 Aug 2009 14:02:40 +1000

Holger Parplies wrote:
> Hi,
> 
> Adam Goryachev wrote on 2009-08-13 15:42:26 +1000 [Re: [BackupPC-users] 
> 100,000+ errors in last nights backup]:
>> [...]
>> I've frequently managed to cause two backuppc_dump's to run in parallel
>> where one was scheduled by backuppc and one was run manually by me from
>> the command line. It would be nice if backuppc_dump could do a simple
>> check to see if the new directory already exists, and if so, simple exit
>> (or error and exit).
> 
> while a check would be possible, it's not quite as simple as that. What
> happens when the machine crashes during running backups? The new/ directory
> won't disappear by itself (well, BackupPC could move all new/ directories to
> trash on startup, but, according to your logic, you might just be running
> BackupPC_dump manually ...). File locking? Put up a big sign "don't run
> BackupPC_dump manually unless you know what you are doing"? ;-)

Of course, but programs shouldn't really be designed around what happens
when a system crashes (though they should try to handle it gracefully).

A simple failure message when the new directory exists, telling the admin
to rm -rf backuppc/pc/host/new (or something to that effect), would be
sufficient...
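
Something along these lines, for example (just a sketch; $TopDir and
$host are guesses at what BackupPC_dump has in scope at that point, not
the real internals):

    # near the start of BackupPC_dump, before any transfer begins
    my $newDir = "$TopDir/pc/$host/new";
    if ( -d $newDir ) {
        print STDERR "BackupPC_dump: $newDir already exists;\n",
                     "another dump may already be running for this host.\n",
                     "If you are sure it is stale, remove it with:\n",
                     "    rm -rf $newDir\n";
        exit(1);
    }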

>> Mainly I run backups manually so I can see exactly what is happening
>> during the backup and where/why it is failing or taking so long.
> 
> Maybe there should/could be a way to serverMesg BackupPC to do a backup for a
> specific host with a -v switch and verbose logging directed to a specific file
> (i.e. make BackupPC_dump -v take a file name argument and pass that over via
> the BackupPC daemon). Please remind me in about two weeks ;-).

Well, I think there is already the ability to increase the log level,
and hence see more information in the log, but this has two issues:
1) I don't really want to modify the config to increase the log level; I
only want it to apply to the current run.
2) The existing logs are not flushed per line; they are only flushed
after a certain number of bytes (probably whenever the output buffer fills).

So, perhaps this could be achieved by overriding the log level with a
command-line parameter, which could also force the log file to flush
after each line. I am pretty sure perl has a way to enable this
(per-line flush instead of waiting for a full buffer) right after the
open(), and it applies until the close().
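
For example (this is just stock perl, nothing BackupPC-specific, and
$logFh/$logFile are made-up names for illustration):

    use IO::Handle;                  # gives file handles the autoflush method
    open(my $logFh, '>>', $logFile)
        or die "can't open $logFile: $!";
    $logFh->autoflush(1);            # flush after every print, until close()

(The old-school spelling is select($logFh); $| = 1; which does the same
thing for the currently selected handle.)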

PS: I know it hasn't been two weeks, but I thought the above would be
easier to implement...

Regards,
Adam

--
Adam Goryachev
Website Managers
www.websitemanagers.com.au
