Author: Lord Sporkton <lordsporkton AT gmail DOT com>
Date: Thu, 25 Apr 2013 14:45:50 -0700
I'm currently backing up MySQL by dumping the DB to a flat file, then backing up the flat file. This works well in most cases, except when someone has a database that is bigger than 50% of the
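The dump-to-flat-file approach described above is commonly done with mysqldump piped through gzip; a minimal sketch, with the database name and output path as placeholders (not from this thread):

```shell
#!/bin/sh
# Dump one database to a compressed flat file that BackupPC can then
# pick up. "mydb" and the output path are placeholders.
# --single-transaction keeps InnoDB tables readable during the dump;
# --quick streams rows instead of buffering them in memory.
OUT=/var/backups/mysql/mydb.sql.gz
mysqldump --single-transaction --quick mydb | gzip > "$OUT"
```

The drawback, as noted later in the thread, is that the flat file changes wholesale every run, so little of it pools between backups.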
Author: Sabuj Pattanayek <sabujp AT gmail DOT com>
Date: Thu, 25 Apr 2013 16:53:00 -0500
Does your MySQL DB live on a Unix system? If so, why not use automysqlbackup and just have it dump to your backup system over NFS? On Thu, Apr 25, 2013 at 4:45 PM, Lord Sporkton <lordsporkton AT gmail DOT com>
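That setup could look something like the following config fragment; the cron schedule, the wrapper path (Debian-style), and the NFS mount point are all illustrative assumptions:

```shell
# /etc/cron.d/automysqlbackup -- hypothetical cron entry.
# Runs automysqlbackup nightly; its configured backup directory would
# point at an NFS mount exported by the backup server (e.g.
# /mnt/backupsrv/mysql), so the dumps land on the backup system directly.
30 2 * * * root /usr/sbin/automysqlbackup
```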
Author: Lord Sporkton <lordsporkton AT gmail DOT com>
Date: Thu, 25 Apr 2013 15:35:20 -0700
Author: Mark Rosedale <mrosedale AT vivox DOT com>
Date: Thu, 25 Apr 2013 18:44:19 -0400 (EDT)
It is on Linux, yes. That is not out of the question, but it would be preferred for management purposes to do it through the command in BackupPC. It's not just one server, it's dozens and growing. It's a
Author: Lord Sporkton <lordsporkton AT gmail DOT com>
Date: Thu, 25 Apr 2013 16:09:43 -0700
-- Original Message -- From: "Lord Sporkton" <lordsporkton AT gmail DOT com> To: "General list for user discussion, questions and support" <backuppc-users AT lists.sourceforge DOT net> Sent: Thursday
Author: Sabuj Pattanayek <sabujp AT gmail DOT com>
Date: Thu, 25 Apr 2013 20:29:58 -0500
Please show an example of where you can stream data directly into tar.
Author: Les Mikesell <lesmikesell AT gmail DOT com>
Date: Thu, 25 Apr 2013 23:39:47 -0500
On Thu, Apr 25, 2013 at 6:09 PM, Lord Sporkton <lordsporkton AT gmail DOT com> wrote: I don't think you are going to find a way to get BackupPC to collect the output stream directly. However, you coul
Author: Adam Goryachev <mailinglists AT websitemanagers.com DOT au>
Date: Fri, 26 Apr 2013 15:15:34 +1000
I think the issue is that backuppc expects tar to not just provide a stream of data, but a list of filenames with the data for each file included. If you pipe the data into tar, I'm not sure that tar
Author: Lord Sporkton <lordsporkton AT gmail DOT com>
Date: Thu, 25 Apr 2013 23:24:23 -0700
Author: Arnold Krille <arnold AT arnoldarts DOT de>
Date: Fri, 26 Apr 2013 22:27:44 +0200
I don't know about your rates, but here in Europe a new 2 TB disk costs less than me thinking about and trying to implement anything like this. However, the idea seems interesting (hobby isn't always about
Author: Timothy J Massey <tmassey AT obscorp DOT com>
Date: Fri, 26 Apr 2013 16:39:45 -0400
Arnold Krille <arnold AT arnoldarts DOT de> wrote on 04/26/2013 04:27:44 PM: <Speculation about getting data to BackupPC snipped.> I second this. I usually have
Author: Lord Sporkton <lordsporkton AT gmail DOT com>
Date: Fri, 26 Apr 2013 13:46:25 -0700
As mentioned, we have multiple customers and departments; it's not just one server. Also, 50 GB databases aren't the largest: we have individual DBs up to 200 GB. Also, we're using SCSI drives, which co
Author: <backuppc AT kosowsky DOT org>
Date: Fri, 26 Apr 2013 17:04:07 -0400
Lord Sporkton wrote at about 13:46:25 -0700 on Friday, April 26, 2013: I don't think BackupPC is the right solution for backing up regularly changing files (like databases) that are 200GB. First, you
Author: Timothy J Massey <tmassey AT obscorp DOT com>
Date: Fri, 26 Apr 2013 17:20:13 -0400
<backuppc AT kosowsky DOT org> wrote on 04/26/2013 05:04:07 PM: Which is something *else* I do (for, say, NTBackup or Windows Server Backup), again by using that NFS or Samba share on my BackupPC s
Author: Les Mikesell <lesmikesell AT gmail DOT com>
Date: Fri, 26 Apr 2013 16:29:10 -0500
On Fri, Apr 26, 2013 at 3:46 PM, Lord Sporkton <lordsporkton AT gmail DOT com> wrote: I think you are missing what people are saying. Throw some big, cheap SATA drives somewhere, on any convenient bo
Timothy J Massey wrote at about 17:20:13 -0400 on Friday, April 26, 2013: My point is that even with O(100) files/copies, which, assuming you are backing up multiple versions, means you have far fewer d
Author: <backuppc AT kosowsky DOT org>
Date: Fri, 26 Apr 2013 18:27:32 -0400
Les Mikesell wrote at about 16:29:10 -0500 on Friday, April 26, 2013: Precisely... if you just have a few (as in O(100)) large database files, then you don't need all the complexity of BackupPC, which
Author: Les Mikesell <lesmikesell AT gmail DOT com>
Date: Fri, 26 Apr 2013 17:47:40 -0500
An interesting - and simple - approach that might work would be to feed the uncompressed DB dump to split (in a unique directory per run) and then look at how much pooling backuppc can do with the re
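That split approach can be sketched as below; the chunk size and paths are illustrative, and a real run would feed mysqldump output into the pipe instead of the stand-in data used here:

```shell
#!/bin/sh
# Split an uncompressed dump into fixed-size chunks in a per-run
# directory. Chunks covering unchanged regions of the dump come out
# byte-identical between runs, so BackupPC's pooling can dedupe them.
DUMPDIR="/var/backups/mysql/dump-$(date +%Y%m%d%H%M%S)"
mkdir -p "$DUMPDIR"
# Stand-in for: mysqldump --single-transaction mydb | split ...
head -c 1048576 /dev/zero | split -b 262144 -d - "$DUMPDIR/chunk."
# Yields chunk.00 .. chunk.03, 256 KiB each (GNU split's -d option
# gives numeric suffixes).
```

Whether this wins anything depends on how stable the dump's byte layout is from run to run, which the thread leaves open.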
Author: Timothy J Massey <tmassey AT obscorp DOT com>
Date: Fri, 26 Apr 2013 21:05:20 -0400
<backuppc AT kosowsky DOT org> wrote on 04/26/2013 06:27:32 PM: I get your point, though I would ask you to define "better"... Why not cron? There's no Web GUI, it doesn't expire old ve