I’m curious if anyone is doing this and has had success.
I am currently testing backups and restores to an Amazon S3 bucket mounted with S3QL v1.18.1-1.
I have been able to back up and restore smaller files (1 GB to 15 GB) successfully, and I have also successfully backed up a
large 93 GB file to the S3 bucket. However, restores of that large file, whether to the same host or to a different host on the
same subnet, have failed multiple times. The failure is strange: the job just hangs indefinitely and simply stops restoring, as
evidenced by the partially restored file no longer growing in size.
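For reference, the bucket is mounted with S3QL before bacula-sd starts, roughly like this (the bucket name below is just a placeholder; credentials live in ~/.s3ql/authinfo2):

  mkfs.s3ql s3://my-bacula-bucket                              # one-time, when the bucket was first set up
  mount.s3ql --allow-other s3://my-bacula-bucket /backups/s32  # --allow-other so the bacula-sd process can access the mount
  umount.s3ql /backups/s32                                     # only after bacula-sd has released the device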
I’m wondering whether I’m running into some kind of limit with Bacula, S3QL, and large files. Has anyone had any success with this?
"list jobs" also still shows the job in R (running) status.
Running "status storage" against the file storage daemon shows this:
backup-slave02-sd Version: 5.2.5 (26 January 2012) x86_64-pc-linux-gnu ubuntu 12.04
Daemon started 23-Jul-14 19:24. Jobs: run=1, running=0.
Heap: heap=270,336 smbytes=474,925 max_bytes=475,119 bufs=750 max_bufs=752
Sizes: boffset_t=8 size_t=8 int32_t=4 int64_t=8 mode=0,0
Running Jobs:
Reading: Full Restore job RestoreFiles JobId=50560 Volume="filestorage0706"
pool="Default" device="FileStorage" (/backups/s32/)
Files=0 Bytes=0 Bytes/sec=0
FDReadSeqNo=6 in_msg=6 out_msg=1709093 fd=6
====
Device status:
Device "FileStorage" (/backups/s32/) is mounted with:
Volume: filestorage0706
Pool: Disk
Media type: File
Total Bytes Read=2,380,299,264 Blocks Read=36,897 Bytes/block=64,512
Positioned at File=0 Block=2,380,234,965
====
Used Volume status:
filestorage0706 on device "FileStorage" (/backups/s32/)
Reader=1 writers=0 devres=0 volinuse=1
filestorage0701 read volume JobId=50560
filestorage0702 read volume JobId=50560
filestorage0703 read volume JobId=50560
filestorage0704 read volume JobId=50560
filestorage0705 read volume JobId=50560
filestorage0706 read volume JobId=50560
filestorage0707 read volume JobId=50560
filestorage0708 read volume JobId=50560
filestorage0709 read volume JobId=50560
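In case the device definition matters: the SD Device resource behind "FileStorage" is essentially the stock Bacula file-device definition pointed at the S3QL mount, along these lines (the usual file-device directives, with Archive Device set to the mount point):

  Device {
    Name = FileStorage
    Media Type = File
    Archive Device = /backups/s32     # the S3QL mount point
    LabelMedia = yes
    Random Access = yes
    AutomaticMount = yes
    RemovableMedia = no
    AlwaysOpen = no
  }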