Networker

Re: [Networker] Drive speed?

2011-12-09 16:30:32
Subject: Re: [Networker] Drive speed?
From: Eddie Albert <Eddie.Albert AT CITIZENSFLA DOT COM>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Fri, 9 Dec 2011 16:29:37 -0500
The other thing that may be affecting the speed is the TYPE of files
being backed up. The easiest example for me to remember is *.pst files:
if the file is open and still being written, NetWorker will wait for
the write to finish before backing it up. Food for thought - hopefully
no one gets food poisoning.

Have a great weekend everyone.

Semper fidelis et paratus, /ALE

-----Original Message-----
From: EMC NetWorker discussion [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU] On
Behalf Of George Sinclair
Sent: Friday, December 09, 2011 4:18 PM
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Subject: Re: [Networker] Drive speed?

On 2011-12-09 15:37, Eddie Albert wrote:
> What is the purpose of tweaking the job for performance? Is the
> purpose to backup asap to release your business application?
>

I was really just trying to review what sending a second save set to the
same drive will achieve in terms of the one that's already running to
that same drive but is going slowly. Apparently, sending additional save
sets can not only up the speed of the drive, but in the process, it will
also make the slow one more efficient since the drive is now streaming.

The slow save set is listed as an enumerated save set under the NSR
client resource as: /pathname/blah-blah-blah. We usually get pretty good
write speeds, but this one was running at a time when there was little
else going on, so that may be why it was running so slowly since there
was nothing else writing to the drive. Also, there could be a ton of
inodes under there, fragmentation, etc. This is the first time that I've
backed it up on that system, and that client is faster than the one
where it previously lived. But then again, I'm usually not watching the
write speeds since these normally run after hours. But when I then
started a different save set (manually, from the client), the write
speed on the drive ramped way up. That's not surprising, but I was
mainly asking how it would affect the write speed of the original save
set that was writing slowly to that same drive. I should have checked
using NMC, since it shows the rate (KB/s) for each save set, but I
didn't do that. I was just looking at the overall write speed for the
drive.
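
Just to make the streaming effect concrete, here's a quick
back-of-the-envelope model of what the drive is doing. The threshold,
native rate, and repositioning penalty below are made-up illustrative
numbers, not LTO-4 specifications:

```python
# Rough model of tape "shoe-shining": when incoming data arrives slower
# than the drive's minimum streaming rate, the drive empties its buffer,
# stops, repositions, and restarts, losing wall-clock time. All numbers
# below are illustrative assumptions, not LTO-4 specifications.

def effective_drive_rate(incoming_mb_s, stream_min_mb_s=30.0,
                         native_mb_s=120.0, reposition_penalty=0.5):
    """Effective write rate at the drive for a given incoming rate."""
    if incoming_mb_s >= stream_min_mb_s:
        # Drive can stream: it writes as fast as data arrives,
        # capped at its native rate.
        return min(incoming_mb_s, native_mb_s)
    # Shoe-shining: assume half the time is lost to stop/rewind/restart.
    return incoming_mb_s * (1.0 - reposition_penalty)

# One slow save set at 10 MB/s: below the threshold, the drive thrashes.
print(effective_drive_rate(10.0))         # -> 5.0

# Add a second save set pushing 60 MB/s: the combined 70 MB/s streams
# cleanly, and the slow stream's blocks ride along without the
# stop/start tax.
print(effective_drive_rate(10.0 + 60.0))  # -> 70.0
```

Under this toy model, the second stream lifts the aggregate rate over
the streaming threshold, so the first stream's blocks stop paying the
repositioning penalty - which matches the behavior I was seeing.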

> What does the backup configuration look like?
> Saveset=all
> or
> Saveset=C:\
> Saveset=D:\
> Saveset=E:\
> Saveset=F:\
> Saveset=G:\
> Saveset=H:\
> Saveset=I:\
>
> If you can't tweak the saveset/performance consider changing out the
> backup device to DataDomain or Avamar?

I'll have to check to see why it was still running when I came in this
morning. It might be that the group didn't start that save set until
very late, and maybe by then most of the others were done, so the drive
slowed down. There are some paths that have a huge number of inodes,
and while not terribly large, they can take much longer to back up than
a similar-sized file system with fewer files. Usually, the performance
is acceptable since most of the data runs concurrently with other save
sets, and the drives stream OK.
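
As a rough illustration of why inode count matters more than raw size,
here's a toy calculation. The per-file overhead and transfer rate are
made-up numbers, not measurements from our setup:

```python
# Toy model of why a path with a huge number of inodes backs up slowly:
# each file carries a fixed per-file cost (stat, open, index entry) on
# top of the raw data transfer. The 2 ms overhead and 60 MB/s rate are
# made-up illustrative numbers.

def backup_time_hours(total_gb, n_files, per_file_ms=2.0, rate_mb_s=60.0):
    transfer_s = (total_gb * 1024) / rate_mb_s     # moving the bytes
    overhead_s = n_files * (per_file_ms / 1000.0)  # per-file walk cost
    return (transfer_s + overhead_s) / 3600.0

# Same 200 GB, two very different file counts:
print(round(backup_time_hours(200, 100_000), 2))     # -> 1.0 hours
print(round(backup_time_hours(200, 10_000_000), 2))  # -> 6.5 hours
```

Same amount of data, several times the elapsed time, purely from the
per-file overhead - which is roughly what we see on those inode-heavy
paths.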

>
> The answer to my first question, is the justification for my last
> comment above.
>
> Semper fidelis et paratus, /ALE
>
>
> -----Original Message-----
> From: EMC NetWorker discussion [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU]
> On Behalf Of Brian O'Neill
> Sent: Friday, December 09, 2011 2:52 PM
> To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
> Subject: Re: [Networker] Drive speed?
>
> On 12/9/2011 2:37 PM, George Sinclair wrote:
>> On 2011-12-09 14:07, Chiravuri, Sri, GDH Consulting/US wrote:
>>> Excellent point. If this is the case - saving same client/saveset
>>> twice (in parallel) could also bounce the throughput up?
>>
>> Interesting theory. If it's true that both benefit - and, yes, I can
>> see that - then what if you were backing up 50 GB of data for a single
>> stream that might otherwise take 45 minutes, let's just say. But, if
>> you instead backed the same save set up twice (in parallel), could it
>> still finish faster than 45 minutes? Hmmm ...
>>
>> George
>
> It really depends on the client and network performance, but I
> actually think this won't work at all. The client couldn't keep up
> with the tape drive speed for some reason - either the disk is slow,
> or the I/O throughput of the system, or the network connection - all
> factors that probably would not change no matter how many save streams
> are running off that single box.
>
> That, and I'm not sure how NetWorker would react to trying to perform
> the same backup at the same time.
>
> You are much better off trying to schedule the slower systems to run
> in parallel with other systems.
>
> Also, even if you could do this, you now need to back up twice as
> much data, and it could still take longer than the ideal speed.
>
>>
>>>
>>> Thx
>>> Sri
>>>
>>> -----Original Message-----
>>> From: EMC NetWorker discussion [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU]
>>> On Behalf Of Brian O'Neill
>>> Sent: Friday, December 09, 2011 1:04 PM
>>> To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
>>> Subject: Re: [Networker] Drive speed?
>>>
>>> The slow speed on the one stream is likely because your
>>> client-to-server data stream is below the minimum threshold of the
>>> tape drive to prevent it from having to write, pause, back up, spin
>>> back up to speed, write, pause, etc. So the total throughput you are
>>> seeing is low because of the time spent not actually writing data to
>>> the tape.
>>>
>>> The second stream brings your client-to-server data throughput over
>>> the minimum of the drive, so the tape drive doesn't have to stop,
>>> rewind, start again, lather, rinse, repeat.
>>>
>>> Both streams should be benefitting - the first stream was penalized
>>> by flow control while the server waited for the tape drive. Now it
>>> doesn't need to.
>>>
>>> On 12/9/2011 10:51 AM, George Sinclair wrote:
>>>> A basic question here on drive speed, but maybe not a simple answer
>>>> as there are undoubtedly numerous variables involved.
>>>>
>>>> Let's say you have an LTO-4 drive (SAS connection to the tape
>>>> library) with a single stream (one save set) clocking in around
>>>> 4-10 MB/sec, coming in over the network. You then start another
>>>> backup (also a single stream) from the same client to the same
>>>> drive, and now it jumps up to 70+ MB/sec, and remains at that speed
>>>> until that second save set completes, and then quiets down to 4-10
>>>> MB/sec again. I've seen this happen with a number of other streams,
>>>> too, wherein running just one of them from that same client,
>>>> concurrent with the already running stream, cranks the speed up
>>>> considerably, until it's done, at which point the original stream
>>>> is reported again to be running at the same slow pace.
>>>>
>>>> We all know that a drive will come closer to performing optimally
>>>> when you can keep it streaming, and you can do that by keeping its
>>>> buffer full. OK, so having more concurrent streams - up to a point
>>>> - will improve drive performance, BUT does it affect the speed at
>>>> which the slow stream runs?
>>>>
>>>> In other words, when the reported write speed jumps up to 70+
>>>> MB/sec because you're now sending another stream (possibly one that
>>>> compresses well), is the original stream (possibly one that does
>>>> not compress so well) now increasing its write speed as a result?
>>>> Or is it instead the case that while the drive is now functioning
>>>> more optimally, and writing more data per second, that first (slow)
>>>> stream is still clunking along at its original speed, and sending
>>>> more streams will not increase the speed of any one of them?
>>>>
>>>> I'm inclined to think that the increase in speed is only affecting
>>>> the additional stream(s) and not that original one.
>>>>
>>>> Thanks.
>>>>
>>>> George
>>>>
>>>
>>> To sign off this list, send email to listserv AT listserv.temple DOT edu
>>> and type "signoff networker" in the body of the email. Please write
>>> to networker-request AT listserv.temple DOT edu if you have any problems
>>> with this list. You can access the archives at
>>> http://listserv.temple.edu/archives/networker.html or via RSS at
>>> http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER
>>>
>>
>>
>


-- 
George Sinclair
Voice: (301) 713-3284 x210
- The preceding message is personal and does not reflect any official or
unofficial position of the United States Department of Commerce -
- Any opinions expressed in this message are NOT those of the US Govt. -


