
Re: [Bacula-users] Fwd: Restore from dead client

From: Kern Sibbald <kern AT sibbald DOT com>
To: Martin Simmons <martin AT lispworks DOT com>
Date: Thu, 11 Sep 2014 14:11:48 +0200
On 09/10/2014 02:58 PM, Martin Simmons wrote:
>>>>>> On Tue, 09 Sep 2014 20:25:18 +0200, Kern Sibbald said:
>> On 09/09/2014 07:46 PM, Martin Simmons wrote:
>>> It looks like removing readfifo=yes will not help, because the restore code
>>> doesn't look at it.
>>>
>>> The restore will not work without a process already reading from the fifo.
>> The simplest Bacula restore doesn't use a fifo. That is what I am
>> recommending that he try.
> Yes, but it looks like his backup contains data of type FT_FIFO, so the
> restore will always try to write it back into a fifo.
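[Editor's note: a restore of an FT_FIFO entry blocks in open() until some process is reading from the fifo, which matches the "ERR=Interrupted system call" seen later in the thread. A rough sketch of attaching a reader first, using demo paths (substitute the real restore path, e.g. /tmp/data/backups/mail/fifo/mail.tar):]

```shell
# Sketch only: drain the fifo into a regular file so the FD's
# open-for-write does not block.  Paths here are demo paths.
FIFO=/tmp/mailfifo_demo/mail.fifo
mkdir -p "$(dirname "$FIFO")"
rm -f "$FIFO"
mkfifo "$FIFO"

# The reader must be attached BEFORE the restore opens the fifo:
cat "$FIFO" > /tmp/mailfifo_demo/restored.tar &
reader_pid=$!

# Here the restore job would write into the fifo; we simulate it:
printf 'tar-stream-bytes' > "$FIFO"

wait "$reader_pid"
cat /tmp/mailfifo_demo/restored.tar   # -> tar-stream-bytes
```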

Yes, good point.  That just illustrates the complexity and the problems
one can have with a fifo.  It also shows the advantage of using the
bpipe plugin: at least then one can later restore the file without the
plugin.
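[Editor's note: for reference, a bpipe-based FileSet might look roughly like the sketch below. The reader/writer commands and paths are hypothetical, not taken from Kenny's setup; the plugin string format is bpipe:<pseudo-path>:<backup command>:<restore command>.]

```conf
# Sketch of a bpipe FileSet (hypothetical commands and paths).
# The backup command's stdout is stored under the pseudo-path;
# on restore, the data is fed to the restore command's stdin.
FileSet {
  Name = Full_mail_bpipe
  Include {
    Options {
      signature = SHA1
    }
    Plugin = "bpipe:/data/backups/mail/mail.tar:tar -cf - /var/mail:tar -xf - -C /"
  }
}
```

Because the stream is stored under an ordinary pseudo-path rather than as an FT_FIFO entry, it can later be restored as a plain file even on a client without the plugin.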

Regards,
Kern

>
> __Martin
>
>
>>                            However it seems that the configuration is
>> quite complete (at least for me).
>> Best regards,
>> Kern
>>> __Martin
>>>
>>>
>>>>>>>> On Tue, 9 Sep 2014 10:57:59 -0400, Kenny Noe said:
>>>> Kern, et al....
>>>>
>>>> I tried another backup.  Here is my client config.  I pared it down to
>>>> just the restore stuff.
>>>> #********************************************************************************
>>>> # bluewhale
>>>> #********************************************************************************
>>>>    Client {
>>>>       Name                   = bluewhale
>>>>       Address                = bluewhale.backup.bnesystems.com
>>>>       Catalog                = BS01-Catalog
>>>>       Password               = "xxxxxx"
>>>>       FileRetention          = 15 days
>>>>       JobRetention           = 15 days
>>>>       AutoPrune              = yes
>>>>       MaximumConcurrentJobs  = 1
>>>>    }
>>>>    Job {
>>>>       Name                   = Restore_mail_bluewhale
>>>>       FileSet                = Full_mail_bluewhale
>>>>       Type                   = Restore
>>>>       Pool                   = Pool_mail_bluewhale
>>>>       Client                 = bluewhale
>>>>       Messages               = Standard
>>>>    }
>>>>    Pool {
>>>>       Name                   = Pool_mail_bluewhale
>>>>       PoolType               = Backup
>>>>       Storage                = Storage_bluewhale
>>>>       MaximumVolumeJobs      = 1
>>>>       CatalogFiles           = yes
>>>>       AutoPrune              = yes
>>>>       VolumeRetention        = 1 week
>>>>       Recycle                = yes
>>>>       LabelFormat            = "mail-"
>>>>    }
>>>>    Storage {
>>>>       Name                   = Storage_bluewhale
>>>>       Address                = 10.10.10.199
>>>>       SDPort                 = 9103
>>>>       Password               = "imadirector"
>>>>       Device                 = File_bluewhale
>>>>       MediaType              = NAS_bluewhale
>>>>       MaximumConcurrentJobs  = 1
>>>>    }
>>>>    Schedule {
>>>>       Name                   = Schedule_mail_bluewhale
>>>>       Run                    = Level=Full sun-sat at 01:00
>>>>    }
>>>>    FileSet {
>>>>       Name = Full_mail_bluewhale
>>>>       Include {
>>>>          Options {
>>>>             signature=SHA1
>>>>          }
>>>>          File="mail.tar"
>>>>       }
>>>>    }
>>>>
>>>>
>>>> During the restore I ran status storage from the console.  I get this
>>>>
>>>> *status sto
>>>> The defined Storage resources are:
>>>>      1: Storage_asterisk
>>>>      2: Storage_besc-4dvapp
>>>>      3: Storage_besc-bs01
>>>>      4: Storage_besc-unixmgr01
>>>>      5: Storage_bluewhale
>>>>      6: Storage_demo
>>>>      7: Storage_dev
>>>>      8: Storage_mako
>>>> Select Storage resource (1-8): 5
>>>> Connecting to Storage daemon Storage_bluewhale at 10.10.10.199:9103
>>>>
>>>> BS01-SD1 Version: 5.2.2 (26 November 2011) x86_64-unknown-linux-gnu ubuntu
>>>> 11.10
>>>> Daemon started 09-Sep-14 09:19. Jobs: run=1, running=0.
>>>>  Heap: heap=598,016 smbytes=386,922 max_bytes=405,712 bufs=947 max_bufs=949
>>>>  Sizes: boffset_t=8 size_t=8 int32_t=4 int64_t=8 mode=0,0
>>>>
>>>> Running Jobs:
>>>> Reading: Full Restore job Restore_mail_bluewhale JobId=12922
>>>> Volume="mail-0386"
>>>>     pool="Pool_mail_bluewhale" device="File_bluewhale"
>>>> (/nas/bacula/bluewhale)
>>>>     Files=0 Bytes=0 Bytes/sec=0
>>>>     FDReadSeqNo=6 in_msg=6 out_msg=2320699 fd=6
>>>> ====
>>>>
>>>> Jobs waiting to reserve a drive:
>>>> ====
>>>>
>>>> Terminated Jobs:
>>>>  JobId  Level    Files      Bytes   Status   Finished        Name
>>>> ===================================================================
>>>>  12913  Incr        108    73.85 M  OK       08-Sep-14 20:02 Backup_os_dev
>>>>  12912  Full          4    61.80 G  OK       08-Sep-14 20:24 
>>>> Backup_app_demo
>>>>  12914  Incr        230    57.85 M  OK       09-Sep-14 00:00
>>>> Backup_os_asterisk
>>>>  12916  Incr         31    68.06 M  OK       09-Sep-14 00:01
>>>> Backup_os_besc-unixmgr01
>>>>  12917  Incr          0         0   Cancel   09-Sep-14 00:03
>>>> Backup_os_bluewhale
>>>>  12918  Full          4    501.3 M  OK       09-Sep-14 00:04 Backup_app_dev
>>>>  12915  Incr        256    1.099 G  OK       09-Sep-14 00:06
>>>> Backup_os_besc-bs01
>>>>  12919  Full          4    54.41 G  OK       09-Sep-14 01:04 
>>>> Backup_app_mako
>>>>  12920                0         0   Cancel   09-Sep-14 09:17
>>>> Restore_mail_bluewhale
>>>>  12921                0         0   OK       09-Sep-14 09:43
>>>> Restore_mail_bluewhale
>>>> ====
>>>>
>>>> Device status:
>>>> Device "File_asterisk" (/nas/bacula/asterisk) is not open.
>>>> Device "File_besc-4dvapp" (/nas/bacula/besc-4dvapp) is not open.
>>>> Device "File_besc-bs01" (/nas/bacula/besc-bs01) is not open.
>>>> Device "File_besc-unixmgr01" (/nas/bacula/besc-unixmgr01) is not open.
>>>> Device "File_bluewhale" (/nas/bacula/bluewhale) is mounted with:
>>>>     Volume:      mail-0386
>>>>     Pool:        *unknown*
>>>>     Media type:  NAS_bluewhale
>>>>     Total Bytes Read=17,923,756,032 Blocks Read=277,836 Bytes/block=64,512
>>>>     Positioned at File=4 Block=743,886,199
>>>> Device "File_demo" (/nas/bacula/demo) is not open.
>>>> Device "File_dev" (/nas/bacula/dev) is not open.
>>>> Device "File_mako" (/nas/bacula/mako) is not open.
>>>> Device "File_qa" (/nas/bacula/qa) is not open.
>>>> Device "File_qa2" (/nas/bacula/qa2) is not open.
>>>> Device "File_smart" (/nas/bacula/smart) is not open.
>>>> ====
>>>>
>>>> Used Volume status:
>>>> mail-0386 on device "File_bluewhale" (/nas/bacula/bluewhale)
>>>>     Reader=1 writers=0 devres=0 volinuse=1
>>>> mail-0386 read volume JobId=12922
>>>> ====
>>>>
>>>> ====
>>>>
>>>>
>>>>
>>>> Why is Pool "*unknown*"??  Device status shows Total Bytes Read increasing
>>>> each time I run a status storage check, BUT under the "Reading"
>>>> section it shows "Bytes=0" and "Bytes/sec=0".
>>>>
>>>> Finally, the job completes after approximately 25 minutes and the log
>>>> captures this:
>>>>
>>>>
>>>> 09-Sep 10:11 BS01-DIR1 JobId 12922: Start Restore Job
>>>> Restore_mail_bluewhale.2014-09-09_10.11.16_04
>>>> 09-Sep 10:11 BS01-DIR1 JobId 12922: Using Device "File_bluewhale"
>>>> 09-Sep 10:11 BS01-SD1 JobId 12922: Ready to read from volume "mail-0386" on
>>>> device "File_bluewhale" (/nas/bacula/bluewhale).
>>>> 09-Sep 10:11 BS01-SD1 JobId 12922: Forward spacing Volume "mail-0386" to
>>>> file:block 0:219.
>>>> 09-Sep 10:33 BS01-SD1 JobId 12922: End of Volume at file 28 on device
>>>> "File_bluewhale" (/nas/bacula/bluewhale), Volume "mail-0386"
>>>> 09-Sep 10:33 BS01-SD1 JobId 12922: End of all volumes.
>>>> 09-Sep 10:12 BS01-FD1 JobId 12922: Error: create_file.c:292 Could not open
>>>> /tmp/data/backups/mail/fifo/mail.tar: ERR=Interrupted system call
>>>> 09-Sep 10:33 BS01-DIR1 JobId 12922: Bacula BS01-DIR1 5.2.2 (26Nov11):
>>>>   Build OS:               x86_64-unknown-linux-gnu ubuntu 11.10
>>>>   JobId:                  12922
>>>>   Job:                    Restore_mail_bluewhale.2014-09-09_10.11.16_04
>>>>   Restore Client:         besc-bs01
>>>>   Start time:             09-Sep-2014 10:11:18
>>>>   End time:               09-Sep-2014 10:33:31
>>>>   Files Expected:         1
>>>>   Files Restored:         0
>>>>   Bytes Restored:         0
>>>>   Rate:                   0.0 KB/s
>>>>   FD Errors:              0
>>>>   FD termination status:  OK
>>>>   SD termination status:  OK
>>>>   Termination:            Restore OK -- warning file count mismatch
>>>>
>>>> 09-Sep 10:33 BS01-DIR1 JobId 12922: Begin pruning Jobs older than 15 days .
>>>> 09-Sep 10:33 BS01-DIR1 JobId 12922: No Jobs found to prune.
>>>> 09-Sep 10:33 BS01-DIR1 JobId 12922: Begin pruning Files.
>>>> 09-Sep 10:33 BS01-DIR1 JobId 12922: No Files found to prune.
>>>> 09-Sep 10:33 BS01-DIR1 JobId 12922: End auto prune.
>>>>
>>>>
>>>> Thoughts??
>>>>
>>>> Thanks ----Kenny
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On Tue, Sep 9, 2014 at 9:56 AM, Kern Sibbald <kern AT sibbald DOT com> 
>>>> wrote:
>>>>
>>>>>  Hello,
>>>>>
>>>>> I would remove the
>>>>>
>>>>>    readfifo=yes
>>>>>
>>>>> though I am not 100% sure it is used on a restore.
>>>>>
>>>>> Then simply restore the file "mail.tar" making absolutely sure you have
>>>>> not marked any directories for restore.  Do the restore to /tmp. Then you
>>>>> will have the mail.tar file that you can detar manually to get your files
>>>>> back.
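[Editor's note: the final manual step above can be sketched like this, using a throwaway demo archive under /tmp; substitute the restored /tmp/mail.tar and a real target directory:]

```shell
# Sketch: unpack a restored tar by hand (demo archive, demo paths).
mkdir -p /tmp/detar_demo/src /tmp/detar_demo/out
echo "hello" > /tmp/detar_demo/src/msg.txt

# Stand-in for the restored /tmp/mail.tar:
tar -cf /tmp/detar_demo/mail.tar -C /tmp/detar_demo/src .

# The actual manual restore step:
tar -xf /tmp/detar_demo/mail.tar -C /tmp/detar_demo/out
ls /tmp/detar_demo/out   # -> msg.txt
```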
>>>>>
>>>>> Best regards,
>>>>> Kern
>>>>>
>>>>> On 09/09/2014 03:25 PM, Kenny Noe wrote:
>>>>>
>>>>> Ana,  Hi!  Thanks for the reply...   I get the same error no matter where
>>>>> I try to write to.  I've tried writing to a remote NAS and to the local /tmp.
>>>>>
>>>>>  Kern,  below is my Fileset.  Should I remove the "Include" statement?
>>>>>
>>>>>   FileSet {
>>>>>       Name = Full_mail_bluewhale
>>>>>       Include {
>>>>>          Options {
>>>>>             signature=SHA1
>>>>>             readfifo=yes
>>>>>          }
>>>>>          File="/data/backups/mail/fifo/mail.tar"
>>>>>       }
>>>>>    }
>>>>>
>>>>>
>>>>>  Thank you all for your comments so far.  I'm still trying to complete
>>>>> the restore, so any input would be appreciated.
>>>>>
>>>>>  Sincerely,
>>>>> --Kenny
>>>>>
>>>>>
>>>>>  ...
>>>>>


_______________________________________________
Bacula-users mailing list
Bacula-users AT lists.sourceforge DOT net
https://lists.sourceforge.net/lists/listinfo/bacula-users
