Hi Julian,
Here's the info for that filesystem. I also just tried my 100Gb test, which
fails both on the filesystem itself and on the snapshot. I don't have problems
with 1Gb files either...
NAME       PROPERTY                        VALUE                  SOURCE
rpool/vm2  type                            filesystem             -
rpool/vm2  creation                        Fri Nov  6 14:47 2009  -
rpool/vm2  used                            116G                   -
rpool/vm2  available                       751G                   -
rpool/vm2  referenced                      116G                   -
rpool/vm2  compressratio                   1.00x                  -
rpool/vm2  mounted                         yes                    -
rpool/vm2  quota                           none                   default
rpool/vm2  reservation                     none                   default
rpool/vm2  recordsize                      128K                   default
rpool/vm2  mountpoint                      /rpool/vm2             default
rpool/vm2  sharenfs                        rw,root=vmsrv2,anon=0  local
rpool/vm2  checksum                        on                     default
rpool/vm2  compression                     off                    default
rpool/vm2  atime                           on                     default
rpool/vm2  devices                         on                     default
rpool/vm2  exec                            on                     default
rpool/vm2  setuid                          on                     default
rpool/vm2  readonly                        off                    default
rpool/vm2  zoned                           off                    default
rpool/vm2  snapdir                         hidden                 default
rpool/vm2  aclmode                         groupmask              default
rpool/vm2  aclinherit                      restricted             default
rpool/vm2  canmount                        on                     default
rpool/vm2  shareiscsi                      off                    default
rpool/vm2  xattr                           on                     default
rpool/vm2  copies                          1                      default
rpool/vm2  version                         3                      -
rpool/vm2  utf8only                        off                    -
rpool/vm2  normalization                   none                   -
rpool/vm2  casesensitivity                 sensitive              -
rpool/vm2  vscan                           off                    default
rpool/vm2  nbmand                          off                    default
rpool/vm2  sharesmb                        off                    default
rpool/vm2  refquota                        none                   default
rpool/vm2  refreservation                  none                   default
rpool/vm2  primarycache                    all                    default
rpool/vm2  secondarycache                  all                    default
rpool/vm2  usedbysnapshots                 14.9M                  -
rpool/vm2  usedbydataset                   116G                   -
rpool/vm2  usedbychildren                  0                      -
rpool/vm2  usedbyrefreservation            0                      -
rpool/vm2  org.opensolaris.caiman:install  ready                  inherited from rpool
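For reference, the output above comes from `zfs get all rpool/vm2`. The "100Gb test" mentioned can be sketched roughly as below, scaled down to 10 MB so it runs anywhere; the temp-file path is a stand-in for the real file written to rpool/vm2 (and then read back from its .zfs snapshot):

```shell
# Scaled-down sketch of the zero-file test (hypothetical paths; the
# real test wrote ~100 GB of zeros onto rpool/vm2 itself).
set -e
TESTFILE=$(mktemp)
# Write 10 MiB of zeros (the real test would use something like count=102400).
dd if=/dev/zero of="$TESTFILE" bs=1M count=10 2>/dev/null
ORIG_SIZE=$(wc -c < "$TESTFILE")
echo "original size: $ORIG_SIZE"
# A restore is only good if size and content round-trip exactly; here we
# just confirm the file really is all zeros of the recorded length.
head -c "$ORIG_SIZE" /dev/zero | cmp -s "$TESTFILE" - && echo "all zeros"
rm -f "$TESTFILE"
```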
On 2009-12-29, at 4:11 PM, Fahrer, Julian wrote:
> Hey Paul,
>
> I don't have enough space on the test system right now. I just created a new
> zfs without compression/dedup and a 1GB file on a Solaris 10u6 system.
> I could back up and restore from a snapshot without errors.
>
> Could you post your zfs config?
> zfs get all <zfs-name>
>
> Julian
>
> -----Original Message-----
> From: Paul Greidanus [mailto:paul.greidanus AT gmail DOT com]
> Sent: Tuesday, 29 December 2009 23:00
> To: Fahrer, Julian
> Cc: bacula-users AT lists.sourceforge DOT net
> Subject: Re: [Bacula-users] Cannot restore VMware/ZFS
>
> Solaris is OpenSolaris 2009.06, and I don't think I have compression or
> dedup specifically enabled anywhere.
>
> Can you try backing up and restoring a 100Gb file full of zeros from a
> snapshot?
>
> Paul
>
> On 2009-12-29, at 2:53 PM, Fahrer, Julian wrote:
>
>> Which Solaris are you using?
>> Is ZFS compression/dedup enabled?
>> Maybe I could run some tests for you. I've had no problems with ZFS so far.
>>
>>
>> -----Original Message-----
>> From: Paul Greidanus [mailto:paul.greidanus AT gmail DOT com]
>> Sent: Tuesday, 29 December 2009 21:52
>> To: bacula-users AT lists.sourceforge DOT net
>> Subject: Re: [Bacula-users] Cannot restore VMware/ZFS
>>
>>
>> On 2009-12-28, at 6:56 PM, Marc Schiffbauer wrote:
>>
>>> * Paul Greidanus wrote on 28.12.09 at 23:44:
>>>> I'm trying to restore files I have backed up on the NFS server that I'm
>>>> using to back VMware, but I'm getting similar errors to this every time I
>>>> try to restore:
>>>>
>>>> 28-Dec 12:10 krikkit-dir JobId 1433: Start Restore Job
>>>> Restore.2009-12-28_12.10.28_54
>>>> 28-Dec 12:10 krikkit-dir JobId 1433: Using Device "TL2000-1"
>>>> 28-Dec 12:10 krikkit-sd JobId 1433: 3307 Issuing autochanger "unload slot
>>>> 11, drive 0" command.
>>>> 28-Dec 12:11 krikkit-sd JobId 1433: 3304 Issuing autochanger "load slot 3,
>>>> drive 0" command.
>>>> 28-Dec 12:12 krikkit-sd JobId 1433: 3305 Autochanger "load slot 3, drive
>>>> 0", status is OK.
>>>> 28-Dec 12:12 krikkit-sd JobId 1433: Ready to read from volume "000009L4"
>>>> on device "TL2000-1" (/dev/rmt/0n).
>>>> 28-Dec 12:12 krikkit-sd JobId 1433: Forward spacing Volume "000009L4" to
>>>> file:block 473:0.
>>>> 28-Dec 12:15 krikkit-sd JobId 1433: Error: block.c:1010 Read error on fd=4
>>>> at file:blk 475:0 on device "TL2000-1" (/dev/rmt/0n). ERR=I/O error.
>>>> 28-Dec 12:15 krikkit-sd JobId 1433: End of Volume at file 475 on device
>>>> "TL2000-1" (/dev/rmt/0n), Volume "000009L4"
>>>> 28-Dec 12:15 krikkit-sd JobId 1433: End of all volumes.
>>>> 28-Dec 12:15 krikkit-fd JobId 1433: Error: attribs.c:423 File size of
>>>> restored file
>>>> /backupspool/rpool/vm2/.zfs/snapshot/backup/InvidiCA/InvidiCA-flat.vmdk
>>>> not correct. Original 8589934592, restored 445841408.
>>>>
>>>> Files are backed up from a zfs snapshot which is created just before the
>>>> backup starts. Every other file I am attempting to restore works just
>>>> fine...
>>>>
>>>> Is anyone out there doing ZFS snapshots for VMware, or backing up NFS
>>>> servers that have .vmdk files on it?
>>>
>>> No, but I could imagine that this might have something to do with
>>> some sparse-file setting.
>>>
>>> Have you checked how much space of your 8GB flat vmdk is actually being
>>> used? Maybe it was 445841408 bytes at backup time?
>>>
>>> Does the same happen if you do not use pre-allocated vmdk disks?
>>> (Which is usually better anyway if you use NFS instead of VMFS.)
>>>
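Marc's check above amounts to comparing a file's apparent size with the space it actually occupies; on a sparse file the two differ widely. A minimal sketch using GNU coreutils `stat` (a temp file with a deliberate hole stands in for the real flat vmdk under /rpool/vm2):

```shell
# Detect sparseness: logical length vs. blocks actually allocated.
set -e
SPARSE=$(mktemp)
# Seek 8 MiB out and write a single byte: everything before it is a hole.
dd if=/dev/zero of="$SPARSE" bs=1 count=1 seek=$((8*1024*1024-1)) 2>/dev/null
APPARENT=$(stat -c %s "$SPARSE")                 # logical size in bytes
ALLOCATED=$(( $(stat -c %b "$SPARSE") * 512 ))   # space actually on disk
echo "apparent=$APPARENT allocated=$ALLOCATED"
rm -f "$SPARSE"
```

On a filesystem that supports holes, allocated comes out far below apparent; a fully preallocated vmdk would show the two roughly equal.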
>>
>> All I use is preallocated disks, especially on NFS. I don't think I can
>> actually use sparse disks on NFS.
>>
>> As a test, I created a 100Gb file from /dev/zero, and tried backing that up
>> and restoring it, and I get this:
>>
>> 29-Dec 13:45 krikkit-sd JobId 1446: Error: block.c:1010 Read error on fd=4
>> at file:blk 13:0 on device "TL2000-1" (/dev/rmt/0n). ERR=I/O error.
>> 29-Dec 13:45 krikkit-sd JobId 1446: End of Volume at file 13 on device
>> "TL2000-1" (/dev/rmt/0n), Volume "000010L4"
>> 29-Dec 13:45 krikkit-sd JobId 1446: End of all volumes.
>> 29-Dec 13:46 filer2-fd JobId 1446: Error: attribs.c:423 File size of
>> restored file /scratch/rpool/vm2/.zfs/snapshot/backup/100GbTest not correct.
>> Original 66365161472, restored 376340827.
>>
>> So this tells me that whatever's going on, it's not VMware that's causing
>> the trouble. I'm wondering whether I'm running into problems with ZFS
>> snapshot backups, or just something with large files and Bacula?
>>
>> Paul
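The attribs.c error in both logs is Bacula comparing the restored file's size against the size it recorded at backup time. A toy illustration of that check, with hypothetical temp files and shrunken sizes in place of the 66365161472-byte original:

```shell
# Simulate Bacula's post-restore size check: the "restore" here is
# deliberately truncated, so the check reports a mismatch like the logs above.
set -e
ORIG=$(mktemp); REST=$(mktemp)
head -c 4096 /dev/zero > "$ORIG"   # file as recorded at backup time
head -c 1024 /dev/zero > "$REST"   # short restore (cf. 376340827 of 66365161472)
OS=$(wc -c < "$ORIG"); RS=$(wc -c < "$REST")
if [ "$OS" -ne "$RS" ]; then
  echo "File size of restored file not correct. Original $OS, restored $RS"
fi
rm -f "$ORIG" "$REST"
```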
>> ------------------------------------------------------------------------------
>> This SF.Net email is sponsored by the Verizon Developer Community
>> Take advantage of Verizon's best-in-class app development support
>> A streamlined, 14 day to market process makes app distribution fast and easy
>> Join now and get one step closer to millions of Verizon customers
>> http://p.sf.net/sfu/verizon-dev2dev
>> _______________________________________________
>> Bacula-users mailing list
>> Bacula-users AT lists.sourceforge DOT net
>> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
>