Re: [Bacula-users] bacula to S3 bucket

From: Josh Fisher <jfisher AT pvct DOT com>
To: bacula-users AT lists.sourceforge DOT net
Date: Tue, 02 Oct 2012 12:26:19 -0400
On 10/2/2012 11:08 AM, Tim Dunphy wrote:
> Hey Guys,
>
>  I remember back when I was an Amanda Backup user, one of the features 
> that I liked the most was its ability to let you back up 
> directly to virtual 'tapes' that you could store on Amazon S3.  Does 
> bacula currently offer any feature like this? If so, where might I find 
> the docs? I googled to futility on this topic. If not, might it be an 
> appropriate item for the wish list of bacula features?
>
> I did come up with a kludgy workaround where I mounted one of my S3 
> buckets to my local filesystem using a FUSE-based solution. However, 
> it's dead slow, and for some reason, even though I tried letting an 
> s3sync run all night, I could not transfer my bacula virtual tapes to S3. 
> I was just also wondering if there was any logical reason as to why 
> that wasn't possible. I also tried doing an s3sync of my mysqldump 
> directory, and that went to my S3 bucket without any problem and in 
> under an hour. So I wonder why the bacula virtual tapes would simply 
> not transfer?

I'm not sure of the status of the s3fs filesystem, but I wouldn't 
consider it kludgy. Bacula writes virtual tapes to a mounted filesystem 
and doesn't care whether the filesystem is ext3, s3fs, or whatever. The 
details of the actual I/O are left up to the OS and the OS filesystem 
driver, as they should be. Grafting the S3 protocol into Bacula is not 
appropriate, since any user-mode app should be able to write to the s3fs 
mount in the normal "everything is a file" way.
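
For reference, mounting a bucket with s3fs looks something like the 
following. This is a sketch, not from the original message: the bucket 
name, mount point, and key pair are placeholders, and it assumes the 
s3fs-fuse tool is installed.

```shell
# Store the AWS key pair where s3fs expects it (format ACCESS_KEY:SECRET_KEY).
echo 'AKIAEXAMPLE:mysecretkey' > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs

# Mount the (placeholder) bucket; Bacula can then write virtual tapes
# under /mnt/s3 like any other directory.
mkdir -p /mnt/s3
s3fs my-backup-bucket /mnt/s3 -o passwd_file=${HOME}/.passwd-s3fs
```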

How large is your mysqldump vs. the virtual tape file that is failing? 
Writing to S3 may just be very slow. The way to handle that in Bacula is 
to turn on spooling so that job data is first cached on local disk, 
allowing the client to complete its writing in a timely manner; 
bacula-sd will then de-spool the cached data in the background to 
virtual tape(s) located on the mounted s3fs filesystem. I would first 
try a simple copy (cp) of a virtual tape file to the s3fs mount. Perhaps 
it is just very, very slow, but if cp can't do it, then perhaps there is 
a problem on the S3 end, your ISP is throttling your connection, or 
there is a problem with the s3fs filesystem that the s3fs devs should be 
made aware of.


_______________________________________________
Bacula-users mailing list
Bacula-users AT lists.sourceforge DOT net
https://lists.sourceforge.net/lists/listinfo/bacula-users
