Amanda-Users

Re: Discrepancy between amrestore and dd speeds

From: Ryan Steele <rsteele AT archer-group DOT com>
To: amanda-users AT amanda DOT org
Date: Tue, 18 Dec 2007 18:40:35 -0500
It seems that changing the blocksize with "mt -f /dev/nst0 setblk NNNN" to anything other than what it defaulted to breaks things; I get errors much like the following:

18:06:39 read(3, 0x2b1a7dead010, 262144) = -1 EIO (Input/output error)
18:06:52 write(2, "amrestore: error reading file he"..., 57amrestore: error reading file header: Input/output error
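For what it's worth, the usual way out of that state is to put the drive back into variable-block mode, so reads succeed regardless of the block size the tape was written with. A sketch (assuming the device is /dev/nst0; the function name is mine):

```shell
# Put the drive back in variable-block mode and rewind.
# Assumes a Linux st-driver tape at /dev/nst0; guarded so it
# does nothing on machines without a tape device.
reset_tape_blocksize() {
    dev=${1:-/dev/nst0}
    if [ ! -c "$dev" ]; then
        echo "no tape device at $dev"
        return 0
    fi
    mt -f "$dev" setblk 0   # 0 = variable block size
    mt -f "$dev" rewind
    mt -f "$dev" status     # block size should now read 0
}
msg=$(reset_tape_blocksize /dev/nst0)
echo "$msg"
```

After that, amrestore can be retried without any explicit blocksize, since in variable-block mode each read returns whatever block length is on the tape.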

So, I'm back at square one. My read speeds are topping out in the 5-6MB/s range, and manually setting the blocksize for amrestore doesn't seem to solve the issue. Where else in Amanda should I be looking for bottlenecks? As you may recall, simply using dd to dump from tape gets me in the 12MB/s range, so I would think it's something within Amanda, perhaps amrestore itself since my amdump hums along at 12MB/s. I checked out amrestore.c, but nothing jumped out at me... then again, my C is a bit rusty.

If there are any details I left out, or that need reiteration, please let me know. I've saved all my strace outputs if they'd be helpful to anyone. I think I'm starting to run out of tricks for pinpointing the cause of the slowdown with amrestore.
TIA,

Ryan

--
Ryan Steele
Systems Administrator
The Archer Group

Ryan Steele wrote:
Jeremy,

If I can make the process more efficient, then that's what I want to do. In the event of a catastrophe, time is money.

I have tracked down what I believe to be a bug: if no blocksize is defined for amrestore, it defaults to 32K, even though the man page says "Amrestore should normally be able to determine the blocksize for tapes on its own and not need this parameter" - and I wrote to the tape in 256K blocks. I straced it both with and without the explicit blocksize parameter, and I can see the read/write block sizes differ between the two runs. This really ought to be fixed, both in amrestore's behavior and in the man page. (Version 2.5.1p1-2.1, the latest in Debian Etch. The source files do not differ from the original releases.)

However, setting that option seems (disappointingly) to not make any difference. I ran strace with the -t option, and saw that in both cases, the data transfer rate was about the same - between 5 and 6 MB/s, even though the block sizes were different. This kind of makes me think that's not my bottleneck, but any informed opinions on the subject are welcome.
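One way to put a number on that directly from the strace -t output: sum the bytes returned by read() and divide by the timestamp span. A sketch in awk (the sample lines below are synthetic, only to show the line format; run the same pipeline over the real capture):

```shell
# Estimate read throughput from 'strace -t' output: sum the bytes
# returned by read() calls and divide by elapsed wall-clock seconds.
# The sample input is synthetic, for illustration only.
cat > /tmp/strace.sample <<'EOF'
18:06:30 read(3, 0x2b1a7dead010, 262144) = 262144
18:06:30 read(3, 0x2b1a7dead010, 262144) = 262144
18:06:32 read(3, 0x2b1a7dead010, 262144) = 262144
EOF
rate=$(awk '
    / read\(/ && / = [0-9]+$/ {
        split($1, t, ":")                   # HH:MM:SS -> seconds
        sec = t[1] * 3600 + t[2] * 60 + t[3]
        if (!seen) { first = sec; seen = 1 }
        last = sec
        bytes += $NF                        # return value of read()
    }
    END {
        elapsed = last - first
        if (elapsed < 1) elapsed = 1        # avoid divide-by-zero
        printf "%.1f MB/s", bytes / elapsed / 1048576
    }' /tmp/strace.sample)
echo "$rate"
```

Comparing the figure this produces for the amrestore trace against the dd trace should show whether the read() calls themselves are slow, or whether the time is going somewhere between reads.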

I'm not sure if this is of any consequence or not, but 'mt -f /dev/nst0 status' tells me my 'Tape block size' is 1024 bytes; the max is just shy of 256K. I'll try setting that to both 0 and the max, and see if that generates any different results. Again, informed opinions welcome.

TIA for any insight,

Ryan

--
Ryan Steele
Systems Administrator
The Archer Group

Jeremy Mordkoff wrote:
I know this is no answer, so apologies in advance.

Why are you worried about restore speeds? Hopefully you will never need
it and if you do, will anyone really complain that it took a little
longer than the tape drive is capable of? My users are usually just
grateful that they can get anything back. Plus, if I'm too quick,
they'll ask more and more often :)

JLM



-----Original Message-----
From: owner-amanda-users AT amanda DOT org [mailto:owner-amanda-users AT amanda DOT org] On Behalf Of Ryan Steele
Sent: Tuesday, December 18, 2007 1:16 PM
To: amanda-users AT amanda DOT org
Subject: Discrepancy between amrestore and dd speeds

Hello list,

I've been performance testing Amanda, trying to get the setup ready for prime-time, but I'm having an issue getting my read speeds from tape to even reach 6MB/s.  My writes are about 12.7MB/s, which is what the tape drive boasts (12-24MB/s, native and with 2:1 compression respectively).  But my reads seem to hover around 5.5MB/s.  I ran some dd tests, which show the data being pulled off at 11.7MB/s, and I'm not quite sure how to tell what amrestore is doing that is causing the performance drop-off.  Here are the tests I ran; any help is appreciated.

backup@amandaserver# dd if=/dev/nst0 of=/home/restores/foobar bs=256K count=10000G
10308+0 records in
10307+0 records out
2701918208 bytes (2.7 GB) copied, 230.539 seconds, 11.7 MB/s

backup@amandaserver# amrestore -r /dev/nst0 hostname /foo/bar

backup@amandaserver# stat /home/restores/hostname._foo_bar.20071216.0.1.RAW
  File: `/home/restores/hostname._foo_bar.20071216.0.1.RAW'
  Size: 2836922368      Blocks: 5546275    IO Block: 131072 regular file
Device: 908h/2312d      Inode: 13          Links: 1
Access: (0640/-rw-r-----)  Uid: (   34/  backup)   Gid: (   34/  backup)
Access: 2007-12-18 12:50:15.000000000 -0500
Modify: 2007-12-18 12:58:40.000000000 -0500
Change: 2007-12-18 12:58:40.000000000 -0500

So, ~8 minutes for a 2.7GB file = ~5.7MB/s.  I saw similar performance when retrieving a DLE that spanned 6 chunks (~5.3MB/s).  No errors or anything in my amrestore logs, just start and stop times.  I'm kind of at a loss as to what would be causing this, unless my calculations (size / time) aren't valid?  Thanks.
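The calculation method itself is easy to double-check against the exact stat numbers above (2836922368 bytes, Access 12:50:15 to Modify 12:58:40). A quick sketch, assuming GNU date; note that with 1 MiB = 1048576 bytes it lands a touch below the back-of-envelope ~5.7 figure:

```shell
# Recompute the restore rate from the stat output:
# bytes written divided by (mtime - atime).
size=2836922368                               # bytes, from stat
start=$(date -d '2007-12-18 12:50:15' +%s)    # Access time
end=$(date -d '2007-12-18 12:58:40' +%s)      # Modify time
elapsed=$((end - start))                      # seconds
rate=$(awk -v b="$size" -v s="$elapsed" \
    'BEGIN { printf "%.1f", b / s / 1048576 }')
echo "${elapsed}s elapsed, ${rate} MB/s"
```

So size / time is a valid way to measure it, as long as the file wasn't touched between the start and end of the restore.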

Best Regards,
Ryan

--
Ryan Steele
Systems Administrator
The Archer Group

